//
// Copyright (c) 2017-2022 Advanced Micro Devices, Inc. All rights reserved.
//
// Permission is hereby granted, free of charge, to any person obtaining a copy
// of this software and associated documentation files (the "Software"), to deal
// in the Software without restriction, including without limitation the rights
// to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
// copies of the Software, and to permit persons to whom the Software is
// furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
// THE SOFTWARE.
//

#ifndef AMD_VULKAN_MEMORY_ALLOCATOR_H
#define AMD_VULKAN_MEMORY_ALLOCATOR_H

/** \mainpage Vulkan Memory Allocator

<b>Version 3.0.1-development (2022-03-28)</b>

Copyright (c) 2017-2022 Advanced Micro Devices, Inc. All rights reserved. \n
License: MIT

<b>API documentation divided into groups:</b> [Modules](modules.html)

\section main_table_of_contents Table of contents

- <b>User guide</b>
  - \subpage quick_start
    - [Project setup](@ref quick_start_project_setup)
    - [Initialization](@ref quick_start_initialization)
    - [Resource allocation](@ref quick_start_resource_allocation)
  - \subpage choosing_memory_type
    - [Usage](@ref choosing_memory_type_usage)
    - [Required and preferred flags](@ref choosing_memory_type_required_preferred_flags)
    - [Explicit memory types](@ref choosing_memory_type_explicit_memory_types)
    - [Custom memory pools](@ref choosing_memory_type_custom_memory_pools)
    - [Dedicated allocations](@ref choosing_memory_type_dedicated_allocations)
  - \subpage memory_mapping
    - [Mapping functions](@ref memory_mapping_mapping_functions)
    - [Persistently mapped memory](@ref memory_mapping_persistently_mapped_memory)
    - [Cache flush and invalidate](@ref memory_mapping_cache_control)
  - \subpage staying_within_budget
    - [Querying for budget](@ref staying_within_budget_querying_for_budget)
    - [Controlling memory usage](@ref staying_within_budget_controlling_memory_usage)
  - \subpage resource_aliasing
  - \subpage custom_memory_pools
    - [Choosing memory type index](@ref custom_memory_pools_MemTypeIndex)
    - [Linear allocation algorithm](@ref linear_algorithm)
      - [Free-at-once](@ref linear_algorithm_free_at_once)
      - [Stack](@ref linear_algorithm_stack)
      - [Double stack](@ref linear_algorithm_double_stack)
      - [Ring buffer](@ref linear_algorithm_ring_buffer)
  - \subpage defragmentation
  - \subpage statistics
    - [Numeric statistics](@ref statistics_numeric_statistics)
    - [JSON dump](@ref statistics_json_dump)
  - \subpage allocation_annotation
    - [Allocation user data](@ref allocation_user_data)
    - [Allocation names](@ref allocation_names)
  - \subpage virtual_allocator
  - \subpage debugging_memory_usage
    - [Memory initialization](@ref debugging_memory_usage_initialization)
    - [Margins](@ref debugging_memory_usage_margins)
    - [Corruption detection](@ref debugging_memory_usage_corruption_detection)
  - \subpage opengl_interop
- \subpage usage_patterns
    - [GPU-only resource](@ref usage_patterns_gpu_only)
    - [Staging copy for upload](@ref usage_patterns_staging_copy_upload)
    - [Readback](@ref usage_patterns_readback)
    - [Advanced data uploading](@ref usage_patterns_advanced_data_uploading)
    - [Other use cases](@ref usage_patterns_other_use_cases)
- \subpage configuration
  - [Pointers to Vulkan functions](@ref config_Vulkan_functions)
  - [Custom host memory allocator](@ref custom_memory_allocator)
  - [Device memory allocation callbacks](@ref allocation_callbacks)
  - [Device heap memory limit](@ref heap_memory_limit)
- <b>Extension support</b>
    - \subpage vk_khr_dedicated_allocation
    - \subpage enabling_buffer_device_address
    - \subpage vk_ext_memory_priority
    - \subpage vk_amd_device_coherent_memory
- \subpage general_considerations
  - [Thread safety](@ref general_considerations_thread_safety)
  - [Versioning and compatibility](@ref general_considerations_versioning_and_compatibility)
  - [Validation layer warnings](@ref general_considerations_validation_layer_warnings)
  - [Allocation algorithm](@ref general_considerations_allocation_algorithm)
  - [Features not supported](@ref general_considerations_features_not_supported)

\section main_see_also See also

- [**Product page on GPUOpen**](https://gpuopen.com/gaming-product/vulkan-memory-allocator/)
- [**Source repository on GitHub**](https://github.com/GPUOpen-LibrariesAndSDKs/VulkanMemoryAllocator)

\defgroup group_init Library initialization

\brief API elements related to the initialization and management of the entire library, especially the #VmaAllocator object.

\defgroup group_alloc Memory allocation

\brief API elements related to the allocation, deallocation, and management of Vulkan memory, buffers, and images.
The most basic ones are vmaCreateBuffer() and vmaCreateImage().

\defgroup group_virtual Virtual allocator

\brief API elements related to the mechanism of \ref virtual_allocator - using the core allocation algorithm
for user-defined purposes without allocating any real GPU memory.

\defgroup group_stats Statistics

\brief API elements that query the current status of the allocator, from memory usage and budget to a full dump of the internal state in JSON format.
See documentation chapter: \ref statistics.
*/


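/*
A minimal usage sketch, tying the pieces above together (illustrative only; assumes
`instance`, `physicalDevice`, and `device` are valid handles already created by the
application):

\code
VmaAllocatorCreateInfo allocatorCreateInfo = {};
allocatorCreateInfo.vulkanApiVersion = VK_API_VERSION_1_2;
allocatorCreateInfo.physicalDevice = physicalDevice;
allocatorCreateInfo.device = device;
allocatorCreateInfo.instance = instance;

VmaAllocator allocator;
vmaCreateAllocator(&allocatorCreateInfo, &allocator);

VkBufferCreateInfo bufferCreateInfo = { VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO };
bufferCreateInfo.size = 65536;
bufferCreateInfo.usage = VK_BUFFER_USAGE_VERTEX_BUFFER_BIT | VK_BUFFER_USAGE_TRANSFER_DST_BIT;

VmaAllocationCreateInfo allocationCreateInfo = {};
allocationCreateInfo.usage = VMA_MEMORY_USAGE_AUTO;

VkBuffer buffer;
VmaAllocation allocation;
vmaCreateBuffer(allocator, &bufferCreateInfo, &allocationCreateInfo, &buffer, &allocation, NULL);

// ... use the buffer ...

vmaDestroyBuffer(allocator, buffer, allocation);
vmaDestroyAllocator(allocator);
\endcode
*/
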
#ifdef __cplusplus
extern "C" {
#endif

#ifndef VULKAN_H_
    #include <vulkan/vulkan.h>
#endif

// Define this macro to declare the maximum supported Vulkan version in format AAABBBCCC,
// where AAA = major, BBB = minor, CCC = patch.
// If you want to use a version > 1.0, it still needs to be enabled via VmaAllocatorCreateInfo::vulkanApiVersion.
#if !defined(VMA_VULKAN_VERSION)
    #if defined(VK_VERSION_1_3)
        #define VMA_VULKAN_VERSION 1003000
    #elif defined(VK_VERSION_1_2)
        #define VMA_VULKAN_VERSION 1002000
    #elif defined(VK_VERSION_1_1)
        #define VMA_VULKAN_VERSION 1001000
    #else
        #define VMA_VULKAN_VERSION 1000000
    #endif
#endif
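// For example, an application that must not rely on features newer than Vulkan 1.1
// could pin the version explicitly before including this header (an illustrative
// override; any of the values listed above is valid):
//
//     #define VMA_VULKAN_VERSION 1001000 // Vulkan 1.1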

#if defined(__ANDROID__) && defined(VK_NO_PROTOTYPES) && VMA_STATIC_VULKAN_FUNCTIONS
    extern PFN_vkGetInstanceProcAddr vkGetInstanceProcAddr;
    extern PFN_vkGetDeviceProcAddr vkGetDeviceProcAddr;
    extern PFN_vkGetPhysicalDeviceProperties vkGetPhysicalDeviceProperties;
    extern PFN_vkGetPhysicalDeviceMemoryProperties vkGetPhysicalDeviceMemoryProperties;
    extern PFN_vkAllocateMemory vkAllocateMemory;
    extern PFN_vkFreeMemory vkFreeMemory;
    extern PFN_vkMapMemory vkMapMemory;
    extern PFN_vkUnmapMemory vkUnmapMemory;
    extern PFN_vkFlushMappedMemoryRanges vkFlushMappedMemoryRanges;
    extern PFN_vkInvalidateMappedMemoryRanges vkInvalidateMappedMemoryRanges;
    extern PFN_vkBindBufferMemory vkBindBufferMemory;
    extern PFN_vkBindImageMemory vkBindImageMemory;
    extern PFN_vkGetBufferMemoryRequirements vkGetBufferMemoryRequirements;
    extern PFN_vkGetImageMemoryRequirements vkGetImageMemoryRequirements;
    extern PFN_vkCreateBuffer vkCreateBuffer;
    extern PFN_vkDestroyBuffer vkDestroyBuffer;
    extern PFN_vkCreateImage vkCreateImage;
    extern PFN_vkDestroyImage vkDestroyImage;
    extern PFN_vkCmdCopyBuffer vkCmdCopyBuffer;
    #if VMA_VULKAN_VERSION >= 1001000
        extern PFN_vkGetBufferMemoryRequirements2 vkGetBufferMemoryRequirements2;
        extern PFN_vkGetImageMemoryRequirements2 vkGetImageMemoryRequirements2;
        extern PFN_vkBindBufferMemory2 vkBindBufferMemory2;
        extern PFN_vkBindImageMemory2 vkBindImageMemory2;
        extern PFN_vkGetPhysicalDeviceMemoryProperties2 vkGetPhysicalDeviceMemoryProperties2;
    #endif // #if VMA_VULKAN_VERSION >= 1001000
#endif // #if defined(__ANDROID__) && VMA_STATIC_VULKAN_FUNCTIONS && VK_NO_PROTOTYPES

#if !defined(VMA_DEDICATED_ALLOCATION)
    #if VK_KHR_get_memory_requirements2 && VK_KHR_dedicated_allocation
        #define VMA_DEDICATED_ALLOCATION 1
    #else
        #define VMA_DEDICATED_ALLOCATION 0
    #endif
#endif

#if !defined(VMA_BIND_MEMORY2)
    #if VK_KHR_bind_memory2
        #define VMA_BIND_MEMORY2 1
    #else
        #define VMA_BIND_MEMORY2 0
    #endif
#endif

#if !defined(VMA_MEMORY_BUDGET)
    #if VK_EXT_memory_budget && (VK_KHR_get_physical_device_properties2 || VMA_VULKAN_VERSION >= 1001000)
        #define VMA_MEMORY_BUDGET 1
    #else
        #define VMA_MEMORY_BUDGET 0
    #endif
#endif

// Defined to 1 when the VK_KHR_buffer_device_address device extension or the equivalent core Vulkan 1.2 feature is available in the Vulkan headers.
#if !defined(VMA_BUFFER_DEVICE_ADDRESS)
    #if VK_KHR_buffer_device_address || VMA_VULKAN_VERSION >= 1002000
        #define VMA_BUFFER_DEVICE_ADDRESS 1
    #else
        #define VMA_BUFFER_DEVICE_ADDRESS 0
    #endif
#endif

// Defined to 1 when the VK_EXT_memory_priority device extension is available in the Vulkan headers.
#if !defined(VMA_MEMORY_PRIORITY)
    #if VK_EXT_memory_priority
        #define VMA_MEMORY_PRIORITY 1
    #else
        #define VMA_MEMORY_PRIORITY 0
    #endif
#endif

// Defined to 1 when the VK_KHR_external_memory device extension is available in the Vulkan headers.
#if !defined(VMA_EXTERNAL_MEMORY)
    #if VK_KHR_external_memory
        #define VMA_EXTERNAL_MEMORY 1
    #else
        #define VMA_EXTERNAL_MEMORY 0
    #endif
#endif

// Define these macros to decorate all public functions with additional code,
// before and after the returned type, respectively. This may be useful for
// exporting the functions when compiling VMA as a separate library. Example:
// #define VMA_CALL_PRE  __declspec(dllexport)
// #define VMA_CALL_POST __cdecl
#ifndef VMA_CALL_PRE
    #define VMA_CALL_PRE
#endif
#ifndef VMA_CALL_POST
    #define VMA_CALL_POST
#endif

// Define this macro to decorate pointers with an attribute specifying the
// length of the array they point to, if they are not null.
//
// The length may be one of:
// - The name of another parameter in the argument list where the pointer is declared
// - The name of another member in the struct where the pointer is declared
// - The name of a member of a struct type, meaning the value of that member in
//   the context of the call. For example,
//   VMA_LEN_IF_NOT_NULL("VkPhysicalDeviceMemoryProperties::memoryHeapCount")
//   means the number of memory heaps available in the device associated
//   with the VmaAllocator being dealt with.
#ifndef VMA_LEN_IF_NOT_NULL
    #define VMA_LEN_IF_NOT_NULL(len)
#endif

// The VMA_NULLABLE macro is defined to be _Nullable when compiling with Clang.
// See: https://clang.llvm.org/docs/AttributeReference.html#nullable
#ifndef VMA_NULLABLE
    #ifdef __clang__
        #define VMA_NULLABLE _Nullable
    #else
        #define VMA_NULLABLE
    #endif
#endif

// The VMA_NOT_NULL macro is defined to be _Nonnull when compiling with Clang.
// See: https://clang.llvm.org/docs/AttributeReference.html#nonnull
#ifndef VMA_NOT_NULL
    #ifdef __clang__
        #define VMA_NOT_NULL _Nonnull
    #else
        #define VMA_NOT_NULL
    #endif
#endif

// If non-dispatchable handles are represented as pointers, then we can give
// them nullability annotations.
#ifndef VMA_NOT_NULL_NON_DISPATCHABLE
    #if defined(__LP64__) || defined(_WIN64) || (defined(__x86_64__) && !defined(__ILP32__) ) || defined(_M_X64) || defined(__ia64) || defined (_M_IA64) || defined(__aarch64__) || defined(__powerpc64__)
        #define VMA_NOT_NULL_NON_DISPATCHABLE VMA_NOT_NULL
    #else
        #define VMA_NOT_NULL_NON_DISPATCHABLE
    #endif
#endif

#ifndef VMA_NULLABLE_NON_DISPATCHABLE
    #if defined(__LP64__) || defined(_WIN64) || (defined(__x86_64__) && !defined(__ILP32__) ) || defined(_M_X64) || defined(__ia64) || defined (_M_IA64) || defined(__aarch64__) || defined(__powerpc64__)
        #define VMA_NULLABLE_NON_DISPATCHABLE VMA_NULLABLE
    #else
        #define VMA_NULLABLE_NON_DISPATCHABLE
    #endif
#endif

#ifndef VMA_STATS_STRING_ENABLED
    #define VMA_STATS_STRING_ENABLED 1
#endif

////////////////////////////////////////////////////////////////////////////////
////////////////////////////////////////////////////////////////////////////////
//
//    INTERFACE
//
////////////////////////////////////////////////////////////////////////////////
////////////////////////////////////////////////////////////////////////////////

// Sections for managing code placement in this file, used only for development purposes, e.g. for convenient folding inside an IDE.
#ifndef _VMA_ENUM_DECLARATIONS

/**
\addtogroup group_init
@{
*/

/// Flags for created #VmaAllocator.
typedef enum VmaAllocatorCreateFlagBits
{
    /** \brief Allocator and all objects created from it will not be synchronized internally, so you must guarantee they are used from only one thread at a time or synchronized externally by you.

    Using this flag may increase performance because internal mutexes are not used.
    */
    VMA_ALLOCATOR_CREATE_EXTERNALLY_SYNCHRONIZED_BIT = 0x00000001,
    /** \brief Enables usage of the VK_KHR_dedicated_allocation extension.

    The flag works only if VmaAllocatorCreateInfo::vulkanApiVersion `== VK_API_VERSION_1_0`.
    When it is `VK_API_VERSION_1_1`, the flag is ignored because the extension has been promoted to Vulkan 1.1.

    Using this extension will automatically allocate dedicated blocks of memory for
    some buffers and images instead of suballocating space for them out of bigger
    memory blocks (as if you explicitly used the #VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT
    flag) when it is recommended by the driver. It may improve performance on some
    GPUs.

    You may set this flag only if you found out that the following device extensions are
    supported, you enabled them while creating the Vulkan device passed as
    VmaAllocatorCreateInfo::device, and you want them to be used internally by this
    library:

    - VK_KHR_get_memory_requirements2 (device extension)
    - VK_KHR_dedicated_allocation (device extension)

    When this flag is set, you can experience the following warnings reported by the Vulkan
    validation layer. You can ignore them.

    > vkBindBufferMemory(): Binding memory to buffer 0x2d but vkGetBufferMemoryRequirements() has not been called on that buffer.
    */
    VMA_ALLOCATOR_CREATE_KHR_DEDICATED_ALLOCATION_BIT = 0x00000002,
    /**
    Enables usage of the VK_KHR_bind_memory2 extension.

    The flag works only if VmaAllocatorCreateInfo::vulkanApiVersion `== VK_API_VERSION_1_0`.
    When it is `VK_API_VERSION_1_1`, the flag is ignored because the extension has been promoted to Vulkan 1.1.

    You may set this flag only if you found out that this device extension is supported,
    you enabled it while creating the Vulkan device passed as VmaAllocatorCreateInfo::device,
    and you want it to be used internally by this library.

    The extension provides the functions `vkBindBufferMemory2KHR` and `vkBindImageMemory2KHR`,
    which allow passing a chain of `pNext` structures while binding.
    This flag is required if you use the `pNext` parameter in vmaBindBufferMemory2() or vmaBindImageMemory2().
    */
    VMA_ALLOCATOR_CREATE_KHR_BIND_MEMORY2_BIT = 0x00000004,
    /**
    Enables usage of the VK_EXT_memory_budget extension.

    You may set this flag only if you found out that this device extension is supported,
    you enabled it while creating the Vulkan device passed as VmaAllocatorCreateInfo::device,
    and you want it to be used internally by this library, along with the instance extension
    VK_KHR_get_physical_device_properties2, which is required by it (or Vulkan 1.1, where this extension is promoted).

    The extension provides a query for current memory usage and budget, which will probably
    be more accurate than the estimation used by the library otherwise.
    */
    VMA_ALLOCATOR_CREATE_EXT_MEMORY_BUDGET_BIT = 0x00000008,
    /**
    Enables usage of the VK_AMD_device_coherent_memory extension.

    You may set this flag only if you:

    - found out that this device extension is supported and enabled it while creating the Vulkan device passed as VmaAllocatorCreateInfo::device,
    - checked that `VkPhysicalDeviceCoherentMemoryFeaturesAMD::deviceCoherentMemory` is true and set it while creating the Vulkan device,
    - want it to be used internally by this library.

    The extension and the accompanying device feature provide access to memory types with
    the `VK_MEMORY_PROPERTY_DEVICE_COHERENT_BIT_AMD` and `VK_MEMORY_PROPERTY_DEVICE_UNCACHED_BIT_AMD` flags.
    They are useful mostly for writing breadcrumb markers - a common method for debugging GPU crash/hang/TDR.

    When the extension is not enabled, such memory types are still enumerated, but their usage is illegal.
    To guard against this error, if you don't create the allocator with this flag, it will refuse to allocate any memory or create a custom pool in such a memory type,
    returning `VK_ERROR_FEATURE_NOT_PRESENT`.
    */
    VMA_ALLOCATOR_CREATE_AMD_DEVICE_COHERENT_MEMORY_BIT = 0x00000010,
    /**
    Enables usage of the "buffer device address" feature, which allows you to use the
    `vkGetBufferDeviceAddress*` functions to get a raw GPU pointer to a buffer and pass it for usage inside a shader.

    You may set this flag only if you:

    1. (For Vulkan version < 1.2) Found the device extension VK_KHR_buffer_device_address
    available and enabled it.
    This extension is promoted to core Vulkan 1.2.
    2. Found the device feature `VkPhysicalDeviceBufferDeviceAddressFeatures::bufferDeviceAddress` available and enabled it.

    When this flag is set, you can create buffers with `VK_BUFFER_USAGE_SHADER_DEVICE_ADDRESS_BIT` using VMA.
    The library automatically adds `VK_MEMORY_ALLOCATE_DEVICE_ADDRESS_BIT` to
    allocated memory blocks wherever it might be needed.

    For more information, see documentation chapter \ref enabling_buffer_device_address.
    */
    VMA_ALLOCATOR_CREATE_BUFFER_DEVICE_ADDRESS_BIT = 0x00000020,
    /**
    Enables usage of the VK_EXT_memory_priority extension in the library.

    You may set this flag only if you found this device extension available and enabled it,
    along with `VkPhysicalDeviceMemoryPriorityFeaturesEXT::memoryPriority == VK_TRUE`,
    while creating the Vulkan device passed as VmaAllocatorCreateInfo::device.

    When this flag is used, VmaAllocationCreateInfo::priority and VmaPoolCreateInfo::priority
    are used to set priorities of allocated Vulkan memory. Without it, these variables are ignored.

    A priority must be a floating-point value between 0 and 1, indicating the priority of the allocation relative to other memory allocations.
    Larger values are higher priority. The granularity of the priorities is implementation-dependent.
    It is automatically passed to every call to `vkAllocateMemory` done by the library using the structure `VkMemoryPriorityAllocateInfoEXT`.
    The value to be used for the default priority is 0.5.
    For more details, see the documentation of the VK_EXT_memory_priority extension.
    */
    VMA_ALLOCATOR_CREATE_EXT_MEMORY_PRIORITY_BIT = 0x00000040,

    VMA_ALLOCATOR_CREATE_FLAG_BITS_MAX_ENUM = 0x7FFFFFFF
} VmaAllocatorCreateFlagBits;
/// See #VmaAllocatorCreateFlagBits.
typedef VkFlags VmaAllocatorCreateFlags;
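
/*
An illustrative sketch of combining these flags, assuming the application has already
queried the corresponding device extension (here VK_EXT_memory_budget) and enabled it,
together with its required instance extension:

\code
VmaAllocatorCreateInfo allocatorCreateInfo = {};
// ... fill physicalDevice, device, instance, vulkanApiVersion ...
allocatorCreateInfo.flags =
    VMA_ALLOCATOR_CREATE_EXT_MEMORY_BUDGET_BIT |
    VMA_ALLOCATOR_CREATE_KHR_DEDICATED_ALLOCATION_BIT; // only meaningful on Vulkan 1.0

VmaAllocator allocator;
vmaCreateAllocator(&allocatorCreateInfo, &allocator);
\endcode
*/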

/** @} */

/**
\addtogroup group_alloc
@{
*/

/// \brief Intended usage of the allocated memory.
typedef enum VmaMemoryUsage
{
    /** No intended memory usage specified.
    Use other members of VmaAllocationCreateInfo to specify your requirements.
    */
    VMA_MEMORY_USAGE_UNKNOWN = 0,
    /**
    \deprecated Obsolete, preserved for backward compatibility.
    Prefers `VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT`.
    */
    VMA_MEMORY_USAGE_GPU_ONLY = 1,
    /**
    \deprecated Obsolete, preserved for backward compatibility.
    Guarantees `VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT` and `VK_MEMORY_PROPERTY_HOST_COHERENT_BIT`.
    */
    VMA_MEMORY_USAGE_CPU_ONLY = 2,
    /**
    \deprecated Obsolete, preserved for backward compatibility.
    Guarantees `VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT`, prefers `VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT`.
    */
    VMA_MEMORY_USAGE_CPU_TO_GPU = 3,
    /**
    \deprecated Obsolete, preserved for backward compatibility.
    Guarantees `VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT`, prefers `VK_MEMORY_PROPERTY_HOST_CACHED_BIT`.
    */
    VMA_MEMORY_USAGE_GPU_TO_CPU = 4,
    /**
    \deprecated Obsolete, preserved for backward compatibility.
    Prefers not `VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT`.
    */
    VMA_MEMORY_USAGE_CPU_COPY = 5,
    /**
    Lazily allocated GPU memory having `VK_MEMORY_PROPERTY_LAZILY_ALLOCATED_BIT`.
    Exists mostly on mobile platforms. Using it on a desktop PC or another GPU with no such memory type present will fail the allocation.

    Usage: memory for transient attachment images (color attachments, depth attachments etc.), created with `VK_IMAGE_USAGE_TRANSIENT_ATTACHMENT_BIT`.

    Allocations with this usage are always created as dedicated - it implies #VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT.
    */
    VMA_MEMORY_USAGE_GPU_LAZILY_ALLOCATED = 6,
    /**
    Selects the best memory type automatically.
    This flag is recommended for most common use cases.

    When using this flag, if you want to map the allocation (using vmaMapMemory() or #VMA_ALLOCATION_CREATE_MAPPED_BIT),
    you must pass one of the flags #VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT or #VMA_ALLOCATION_CREATE_HOST_ACCESS_RANDOM_BIT
    in VmaAllocationCreateInfo::flags.

    It can be used only with functions that let the library know `VkBufferCreateInfo` or `VkImageCreateInfo`, e.g.
    vmaCreateBuffer(), vmaCreateImage(), vmaFindMemoryTypeIndexForBufferInfo(), vmaFindMemoryTypeIndexForImageInfo(),
    and not with generic memory allocation functions.
    */
    VMA_MEMORY_USAGE_AUTO = 7,
    /**
    Selects the best memory type automatically with preference for GPU (device) memory.

    When using this flag, if you want to map the allocation (using vmaMapMemory() or #VMA_ALLOCATION_CREATE_MAPPED_BIT),
    you must pass one of the flags #VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT or #VMA_ALLOCATION_CREATE_HOST_ACCESS_RANDOM_BIT
    in VmaAllocationCreateInfo::flags.

    It can be used only with functions that let the library know `VkBufferCreateInfo` or `VkImageCreateInfo`, e.g.
    vmaCreateBuffer(), vmaCreateImage(), vmaFindMemoryTypeIndexForBufferInfo(), vmaFindMemoryTypeIndexForImageInfo(),
    and not with generic memory allocation functions.
    */
    VMA_MEMORY_USAGE_AUTO_PREFER_DEVICE = 8,
    /**
    Selects the best memory type automatically with preference for CPU (host) memory.

    When using this flag, if you want to map the allocation (using vmaMapMemory() or #VMA_ALLOCATION_CREATE_MAPPED_BIT),
    you must pass one of the flags #VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT or #VMA_ALLOCATION_CREATE_HOST_ACCESS_RANDOM_BIT
    in VmaAllocationCreateInfo::flags.

    It can be used only with functions that let the library know `VkBufferCreateInfo` or `VkImageCreateInfo`, e.g.
    vmaCreateBuffer(), vmaCreateImage(), vmaFindMemoryTypeIndexForBufferInfo(), vmaFindMemoryTypeIndexForImageInfo(),
    and not with generic memory allocation functions.
    */
    VMA_MEMORY_USAGE_AUTO_PREFER_HOST = 9,

    VMA_MEMORY_USAGE_MAX_ENUM = 0x7FFFFFFF
} VmaMemoryUsage;
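
/*
An illustrative sketch of the recommended path: let the library pick the memory type
with #VMA_MEMORY_USAGE_AUTO (assumes a valid `allocator`):

\code
VkBufferCreateInfo bufferCreateInfo = { VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO };
bufferCreateInfo.size = 65536;
bufferCreateInfo.usage = VK_BUFFER_USAGE_STORAGE_BUFFER_BIT;

VmaAllocationCreateInfo allocationCreateInfo = {};
allocationCreateInfo.usage = VMA_MEMORY_USAGE_AUTO; // requires vmaCreateBuffer()/vmaCreateImage()

VkBuffer buffer;
VmaAllocation allocation;
vmaCreateBuffer(allocator, &bufferCreateInfo, &allocationCreateInfo, &buffer, &allocation, NULL);
\endcode
*/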

/// Flags to be passed as VmaAllocationCreateInfo::flags.
typedef enum VmaAllocationCreateFlagBits
{
    /** \brief Set this flag if the allocation should have its own memory block.

    Use it for special, big resources, like fullscreen images used as attachments.
    */
    VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT = 0x00000001,

    /** \brief Set this flag to only try to allocate from existing `VkDeviceMemory` blocks and never create a new such block.

    If the new allocation cannot be placed in any of the existing blocks, the allocation
    fails with the `VK_ERROR_OUT_OF_DEVICE_MEMORY` error.

    You should not use #VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT and
    #VMA_ALLOCATION_CREATE_NEVER_ALLOCATE_BIT at the same time. It makes no sense.
    */
    VMA_ALLOCATION_CREATE_NEVER_ALLOCATE_BIT = 0x00000002,
    /** \brief Set this flag to use memory that will be persistently mapped and retrieve a pointer to it.

    The pointer to the mapped memory will be returned through VmaAllocationInfo::pMappedData.

    It is valid to use this flag for an allocation made from a memory type that is not
    `HOST_VISIBLE`. The flag is then ignored and the memory is not mapped. This is
    useful if you need an allocation that is efficient to use on the GPU
    (`DEVICE_LOCAL`) and still want to map it directly if possible on platforms that
    support it (e.g. Intel GPUs).
    */
    VMA_ALLOCATION_CREATE_MAPPED_BIT = 0x00000004,
    /** \deprecated Preserved for backward compatibility. Consider using vmaSetAllocationName() instead.

    Set this flag to treat VmaAllocationCreateInfo::pUserData as a pointer to a
    null-terminated string. Instead of copying the pointer value, a local copy of the
    string is made and stored in the allocation's `pName`. The string is automatically
    freed together with the allocation. It is also used in vmaBuildStatsString().
    */
    VMA_ALLOCATION_CREATE_USER_DATA_COPY_STRING_BIT = 0x00000020,
    /** The allocation will be created from the upper stack in a double stack pool.

    This flag is only allowed for custom pools created with the #VMA_POOL_CREATE_LINEAR_ALGORITHM_BIT flag.
    */
    VMA_ALLOCATION_CREATE_UPPER_ADDRESS_BIT = 0x00000040,
    /** Create both buffer/image and allocation, but don't bind them together.
    It is useful when you want to do more advanced binding yourself, e.g. using some extensions.
    The flag is meaningful only with functions that bind by default: vmaCreateBuffer(), vmaCreateImage().
    Otherwise it is ignored.

    If you want to make sure the new buffer/image is not tied to the new memory allocation
    through the `VkMemoryDedicatedAllocateInfoKHR` structure in case the allocation ends up in its own memory block,
    also use the flag #VMA_ALLOCATION_CREATE_CAN_ALIAS_BIT.
    */
    VMA_ALLOCATION_CREATE_DONT_BIND_BIT = 0x00000080,
    /** Create the allocation only if the additional device memory required for it, if any, won't exceed
    the memory budget. Otherwise returns `VK_ERROR_OUT_OF_DEVICE_MEMORY`.
    */
    VMA_ALLOCATION_CREATE_WITHIN_BUDGET_BIT = 0x00000100,
    /** \brief Set this flag if the allocated memory will have aliasing resources.

    Using this flag prevents supplying `VkMemoryDedicatedAllocateInfoKHR` when #VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT is specified.
    Otherwise the created dedicated memory will not be suitable for aliasing resources, resulting in Vulkan Validation Layer errors.
    */
    VMA_ALLOCATION_CREATE_CAN_ALIAS_BIT = 0x00000200,
    /**
    Requests the possibility to map the allocation (using vmaMapMemory() or #VMA_ALLOCATION_CREATE_MAPPED_BIT).

    - If you use #VMA_MEMORY_USAGE_AUTO or another `VMA_MEMORY_USAGE_AUTO*` value,
      you must use this flag to be able to map the allocation. Otherwise, mapping is incorrect.
    - If you use another value of #VmaMemoryUsage, this flag is ignored and mapping is always possible in memory types that are `HOST_VISIBLE`.
      This includes allocations created in \ref custom_memory_pools.

    Declares that mapped memory will only be written sequentially, e.g. using `memcpy()` or a loop writing number-by-number,
    never read or accessed randomly, so a memory type can be selected that is uncached and write-combined.

    \warning Violating this declaration may work correctly, but will likely be very slow.
    Watch out for implicit reads introduced by doing e.g. `pMappedData[i] += x;`.
    Better prepare your data in a local variable and `memcpy()` it to the mapped pointer all at once.
    */
    VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT = 0x00000400,
    /**
    Requests the possibility to map the allocation (using vmaMapMemory() or #VMA_ALLOCATION_CREATE_MAPPED_BIT).

    - If you use #VMA_MEMORY_USAGE_AUTO or another `VMA_MEMORY_USAGE_AUTO*` value,
      you must use this flag to be able to map the allocation. Otherwise, mapping is incorrect.
    - If you use another value of #VmaMemoryUsage, this flag is ignored and mapping is always possible in memory types that are `HOST_VISIBLE`.
      This includes allocations created in \ref custom_memory_pools.

    Declares that mapped memory can be read, written, and accessed in random order,
    so a `HOST_CACHED` memory type is required.
    */
    VMA_ALLOCATION_CREATE_HOST_ACCESS_RANDOM_BIT = 0x00000800,
    /**
    Together with #VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT or #VMA_ALLOCATION_CREATE_HOST_ACCESS_RANDOM_BIT,
    it says that despite the request for host access, a non-`HOST_VISIBLE` memory type can be selected
    if it may improve performance.

    By using this flag, you declare that you will check if the allocation ended up in a `HOST_VISIBLE` memory type
    (e.g. using vmaGetAllocationMemoryProperties()) and if not, you will create some "staging" buffer and
    issue an explicit transfer to write/read your data.
    To prepare for this possibility, don't forget to add the appropriate flags like
    `VK_BUFFER_USAGE_TRANSFER_DST_BIT`, `VK_BUFFER_USAGE_TRANSFER_SRC_BIT` to the parameters of the created buffer or image.
    */
    VMA_ALLOCATION_CREATE_HOST_ACCESS_ALLOW_TRANSFER_INSTEAD_BIT = 0x00001000,
    /** Allocation strategy that chooses the smallest possible free range for the allocation
    to minimize memory usage and fragmentation, possibly at the expense of allocation time.
    */
    VMA_ALLOCATION_CREATE_STRATEGY_MIN_MEMORY_BIT = 0x00010000,
    /** Allocation strategy that chooses the first suitable free range for the allocation -
    not necessarily the one at the smallest offset but the one that is easiest and fastest to find -
    to minimize allocation time, possibly at the expense of allocation quality.
    */
    VMA_ALLOCATION_CREATE_STRATEGY_MIN_TIME_BIT = 0x00020000,
    /** Allocation strategy that always chooses the lowest offset in available space.
    This is not the most efficient strategy but achieves highly packed data.
    Used internally by defragmentation, not recommended in typical usage.
    */
    VMA_ALLOCATION_CREATE_STRATEGY_MIN_OFFSET_BIT  = 0x00040000,
    /** Alias to #VMA_ALLOCATION_CREATE_STRATEGY_MIN_MEMORY_BIT.
    */
    VMA_ALLOCATION_CREATE_STRATEGY_BEST_FIT_BIT = VMA_ALLOCATION_CREATE_STRATEGY_MIN_MEMORY_BIT,
    /** Alias to #VMA_ALLOCATION_CREATE_STRATEGY_MIN_TIME_BIT.
    */
    VMA_ALLOCATION_CREATE_STRATEGY_FIRST_FIT_BIT = VMA_ALLOCATION_CREATE_STRATEGY_MIN_TIME_BIT,
    /** A bit mask to extract only `STRATEGY` bits from the entire set of flags.
    */
    VMA_ALLOCATION_CREATE_STRATEGY_MASK =
        VMA_ALLOCATION_CREATE_STRATEGY_MIN_MEMORY_BIT |
        VMA_ALLOCATION_CREATE_STRATEGY_MIN_TIME_BIT |
        VMA_ALLOCATION_CREATE_STRATEGY_MIN_OFFSET_BIT,

    VMA_ALLOCATION_CREATE_FLAG_BITS_MAX_ENUM = 0x7FFFFFFF
} VmaAllocationCreateFlagBits;
/// See #VmaAllocationCreateFlagBits.
typedef VkFlags VmaAllocationCreateFlags;
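
/*
An illustrative sketch of a persistently mapped staging buffer using the flags above
(assumes a valid `allocator`; `myData` and `myDataSize` stand for application data):

\code
VkBufferCreateInfo stagingBufferInfo = { VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO };
stagingBufferInfo.size = 65536;
stagingBufferInfo.usage = VK_BUFFER_USAGE_TRANSFER_SRC_BIT;

VmaAllocationCreateInfo stagingAllocCreateInfo = {};
stagingAllocCreateInfo.usage = VMA_MEMORY_USAGE_AUTO;
stagingAllocCreateInfo.flags = VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT |
    VMA_ALLOCATION_CREATE_MAPPED_BIT;

VkBuffer stagingBuffer;
VmaAllocation stagingAllocation;
VmaAllocationInfo stagingAllocationInfo;
vmaCreateBuffer(allocator, &stagingBufferInfo, &stagingAllocCreateInfo,
    &stagingBuffer, &stagingAllocation, &stagingAllocationInfo);

// The buffer stays mapped; write sequentially, e.g. with memcpy().
memcpy(stagingAllocationInfo.pMappedData, myData, myDataSize);
\endcode
*/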

/// Flags to be passed as VmaPoolCreateInfo::flags.
typedef enum VmaPoolCreateFlagBits
{
    /** \brief Use this flag if you always allocate only buffers and linear images or only optimal images out of this pool, so that Buffer-Image Granularity can be ignored.

    This is an optional optimization flag.

    If you always allocate using vmaCreateBuffer(), vmaCreateImage(),
    vmaAllocateMemoryForBuffer(), then you don't need to use it because the allocator
    knows the exact type of your allocations, so it can handle Buffer-Image Granularity
    in the optimal way.

    If you also allocate using vmaAllocateMemoryForImage() or vmaAllocateMemory(),
    the exact type of such allocations is not known, so the allocator must be conservative
    in handling Buffer-Image Granularity, which can lead to suboptimal allocation
    (wasted memory). In that case, if you can make sure you always allocate only
    buffers and linear images or only optimal images out of this pool, use this flag
    to make the allocator disregard Buffer-Image Granularity and so make allocations
    faster and more optimal.
    */
    VMA_POOL_CREATE_IGNORE_BUFFER_IMAGE_GRANULARITY_BIT = 0x00000002,

    /** \brief Enables the alternative, linear allocation algorithm in this pool.

    Specify this flag to enable the linear allocation algorithm, which always creates
    new allocations after the last one and doesn't reuse space from allocations freed in
    between. It trades memory consumption for a simplified algorithm and data
    structure, which has better performance and uses less memory for metadata.

    By using this flag, you can achieve the behavior of a free-at-once, stack,
    ring buffer, or double stack.
    For details, see documentation chapter \ref linear_algorithm.
    */
    VMA_POOL_CREATE_LINEAR_ALGORITHM_BIT = 0x00000004,

    /** Bit mask to extract only `ALGORITHM` bits from the entire set of flags.
    */
    VMA_POOL_CREATE_ALGORITHM_MASK =
        VMA_POOL_CREATE_LINEAR_ALGORITHM_BIT,

    VMA_POOL_CREATE_FLAG_BITS_MAX_ENUM = 0x7FFFFFFF
} VmaPoolCreateFlagBits;
/// Flags to be passed as VmaPoolCreateInfo::flags. See #VmaPoolCreateFlagBits.
typedef VkFlags VmaPoolCreateFlags;
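
/*
An illustrative sketch of creating a custom pool with the linear algorithm (assumes a
valid `allocator` and a `memoryTypeIndex` found e.g. with
vmaFindMemoryTypeIndexForBufferInfo()):

\code
VmaPoolCreateInfo poolCreateInfo = {};
poolCreateInfo.memoryTypeIndex = memoryTypeIndex;
poolCreateInfo.flags = VMA_POOL_CREATE_LINEAR_ALGORITHM_BIT;
poolCreateInfo.blockSize = 16 * 1024 * 1024; // one 16 MiB block, e.g. for a ring buffer
poolCreateInfo.maxBlockCount = 1;

VmaPool pool;
vmaCreatePool(allocator, &poolCreateInfo, &pool);
\endcode
*/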

/// Flags to be passed as VmaDefragmentationInfo::flags.
typedef enum VmaDefragmentationFlagBits
{
    /** \brief Use a simple but fast algorithm for defragmentation.
    May not achieve the best results but requires the least time to compute and the fewest allocations to copy.
    */
    VMA_DEFRAGMENTATION_FLAG_ALGORITHM_FAST_BIT = 0x1,
    /** \brief Default defragmentation algorithm, applied also when no `ALGORITHM` flag is specified.
    Offers a balance between defragmentation quality and the number of allocations and bytes that need to be moved.
    */
    VMA_DEFRAGMENTATION_FLAG_ALGORITHM_BALANCED_BIT = 0x2,
    /** \brief Perform full defragmentation of memory.
    Can result in notably more time to compute and allocations to copy, but will achieve the best memory packing.
    */
    VMA_DEFRAGMENTATION_FLAG_ALGORITHM_FULL_BIT = 0x4,
    /** \brief Use the most robust algorithm at the cost of time to compute and number of copies to make.
    Only available when bufferImageGranularity is greater than 1, since it aims to reduce
    alignment issues between different types of resources.
    Otherwise falls back to the same behavior as #VMA_DEFRAGMENTATION_FLAG_ALGORITHM_FULL_BIT.
    */
    VMA_DEFRAGMENTATION_FLAG_ALGORITHM_EXTENSIVE_BIT = 0x8,

    /// A bit mask to extract only `ALGORITHM` bits from the entire set of flags.
    VMA_DEFRAGMENTATION_FLAG_ALGORITHM_MASK =
        VMA_DEFRAGMENTATION_FLAG_ALGORITHM_FAST_BIT |
        VMA_DEFRAGMENTATION_FLAG_ALGORITHM_BALANCED_BIT |
        VMA_DEFRAGMENTATION_FLAG_ALGORITHM_FULL_BIT |
        VMA_DEFRAGMENTATION_FLAG_ALGORITHM_EXTENSIVE_BIT,

    VMA_DEFRAGMENTATION_FLAG_BITS_MAX_ENUM = 0x7FFFFFFF
} VmaDefragmentationFlagBits;
/// See #VmaDefragmentationFlagBits.
typedef VkFlags VmaDefragmentationFlags;

/// Operation performed on a single defragmentation move. See structure #VmaDefragmentationMove.
typedef enum VmaDefragmentationMoveOperation
{
    /// The buffer/image has been recreated at `dstTmpAllocation`, data has been copied, and the old buffer/image has been destroyed. `srcAllocation` should be changed to point to the new place. This is the default value set by vmaBeginDefragmentationPass().
    VMA_DEFRAGMENTATION_MOVE_OPERATION_COPY = 0,
    /// Set this value if you cannot move the allocation. The new place reserved at `dstTmpAllocation` will be freed. `srcAllocation` will remain unchanged.
    VMA_DEFRAGMENTATION_MOVE_OPERATION_IGNORE = 1,
    /// Set this value if you decide to abandon the allocation and you destroyed the buffer/image. The new place reserved at `dstTmpAllocation` will be freed, along with `srcAllocation`, which will be destroyed.
    VMA_DEFRAGMENTATION_MOVE_OPERATION_DESTROY = 2,
} VmaDefragmentationMoveOperation;
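
/*
An illustrative sketch of a defragmentation loop using these flags and move operations
(assumes a valid `allocator`; recreating the buffers/images and copying their data at
each move is elided here):

\code
VmaDefragmentationInfo defragInfo = {};
defragInfo.flags = VMA_DEFRAGMENTATION_FLAG_ALGORITHM_BALANCED_BIT;

VmaDefragmentationContext defragCtx;
vmaBeginDefragmentation(allocator, &defragInfo, &defragCtx);

for(;;)
{
    VmaDefragmentationPassMoveInfo pass;
    VkResult res = vmaBeginDefragmentationPass(allocator, defragCtx, &pass);
    if(res == VK_SUCCESS)
        break; // nothing left to move
    // For each pass.pMoves[i]: recreate the resource at dstTmpAllocation, copy the
    // data, and destroy the old resource - or set operation to IGNORE/DESTROY.
    res = vmaEndDefragmentationPass(allocator, defragCtx, &pass);
    if(res == VK_SUCCESS)
        break;
}

vmaEndDefragmentation(allocator, defragCtx, NULL);
\endcode
*/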

/** @} */

/**
\addtogroup group_virtual
@{
*/

/// Flags to be passed as VmaVirtualBlockCreateInfo::flags.
typedef enum VmaVirtualBlockCreateFlagBits
{
    /** \brief Enables the alternative, linear allocation algorithm in this virtual block.

    Specify this flag to enable the linear allocation algorithm, which always creates
    new allocations after the last one and doesn't reuse space from allocations freed in
    between. It trades memory consumption for a simplified algorithm and data
    structure, which has better performance and uses less memory for metadata.

    By using this flag, you can achieve the behavior of a free-at-once, stack,
    ring buffer, or double stack.
    For details, see documentation chapter \ref linear_algorithm.
    */
    VMA_VIRTUAL_BLOCK_CREATE_LINEAR_ALGORITHM_BIT = 0x00000001,

    /** \brief Bit mask to extract only `ALGORITHM` bits from the entire set of flags.
    */
    VMA_VIRTUAL_BLOCK_CREATE_ALGORITHM_MASK =
        VMA_VIRTUAL_BLOCK_CREATE_LINEAR_ALGORITHM_BIT,

    VMA_VIRTUAL_BLOCK_CREATE_FLAG_BITS_MAX_ENUM = 0x7FFFFFFF
} VmaVirtualBlockCreateFlagBits;
/// Flags to be passed as VmaVirtualBlockCreateInfo::flags. See #VmaVirtualBlockCreateFlagBits.
typedef VkFlags VmaVirtualBlockCreateFlags;

/// Flags to be passed as VmaVirtualAllocationCreateInfo::flags.
typedef enum VmaVirtualAllocationCreateFlagBits
{
    /** \brief The allocation will be created from the upper stack in a double stack pool.

    This flag is only allowed for virtual blocks created with the #VMA_VIRTUAL_BLOCK_CREATE_LINEAR_ALGORITHM_BIT flag.
    */
    VMA_VIRTUAL_ALLOCATION_CREATE_UPPER_ADDRESS_BIT = VMA_ALLOCATION_CREATE_UPPER_ADDRESS_BIT,
    /** \brief Allocation strategy that tries to minimize memory usage.
    */
    VMA_VIRTUAL_ALLOCATION_CREATE_STRATEGY_MIN_MEMORY_BIT = VMA_ALLOCATION_CREATE_STRATEGY_MIN_MEMORY_BIT,
    /** \brief Allocation strategy that tries to minimize allocation time.
    */
    VMA_VIRTUAL_ALLOCATION_CREATE_STRATEGY_MIN_TIME_BIT = VMA_ALLOCATION_CREATE_STRATEGY_MIN_TIME_BIT,
    /** Allocation strategy that always chooses the lowest offset in available space.
    This is not the most efficient strategy but achieves highly packed data.
    */
    VMA_VIRTUAL_ALLOCATION_CREATE_STRATEGY_MIN_OFFSET_BIT = VMA_ALLOCATION_CREATE_STRATEGY_MIN_OFFSET_BIT,
    /** \brief A bit mask to extract only `STRATEGY` bits from the entire set of flags.

    These strategy flags are binary compatible with the equivalent flags in #VmaAllocationCreateFlagBits.
    */
    VMA_VIRTUAL_ALLOCATION_CREATE_STRATEGY_MASK = VMA_ALLOCATION_CREATE_STRATEGY_MASK,

    VMA_VIRTUAL_ALLOCATION_CREATE_FLAG_BITS_MAX_ENUM = 0x7FFFFFFF
} VmaVirtualAllocationCreateFlagBits;
/// Flags to be passed as VmaVirtualAllocationCreateInfo::flags. See #VmaVirtualAllocationCreateFlagBits.
typedef VkFlags VmaVirtualAllocationCreateFlags;
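
/*
An illustrative sketch of the virtual allocator using these flags (no real GPU memory
is involved; the block only manages offsets):

\code
VmaVirtualBlockCreateInfo blockCreateInfo = {};
blockCreateInfo.size = 1024 * 1024; // 1 MiB of "virtual" space

VmaVirtualBlock block;
vmaCreateVirtualBlock(&blockCreateInfo, &block);

VmaVirtualAllocationCreateInfo allocCreateInfo = {};
allocCreateInfo.size = 4096;
allocCreateInfo.flags = VMA_VIRTUAL_ALLOCATION_CREATE_STRATEGY_MIN_OFFSET_BIT;

VmaVirtualAllocation alloc;
VkDeviceSize offset;
vmaVirtualAllocate(block, &allocCreateInfo, &alloc, &offset);

// ... use the range [offset, offset + 4096) in your own resource ...

vmaVirtualFree(block, alloc);
vmaDestroyVirtualBlock(block);
\endcode
*/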

/** @} */

#endif // _VMA_ENUM_DECLARATIONS

#ifndef _VMA_DATA_TYPES_DECLARATIONS

/**
\addtogroup group_init
@{ */

/** \struct VmaAllocator
\brief Represents the main, initialized object of this library.

Fill structure #VmaAllocatorCreateInfo and call function vmaCreateAllocator() to create it.
Call function vmaDestroyAllocator() to destroy it.

It is recommended to create just one object of this type per `VkDevice` object,
right after Vulkan is initialized, and keep it alive until just before the Vulkan device is destroyed.
*/
VK_DEFINE_HANDLE(VmaAllocator)

/** @} */

/**
\addtogroup group_alloc
@{
*/

/** \struct VmaPool
\brief Represents a custom memory pool.

Fill structure VmaPoolCreateInfo and call function vmaCreatePool() to create it.
Call function vmaDestroyPool() to destroy it.

For more information see [Custom memory pools](@ref choosing_memory_type_custom_memory_pools).
*/
VK_DEFINE_HANDLE(VmaPool)

/** \struct VmaAllocation
\brief Represents a single memory allocation.

It may be either a dedicated block of `VkDeviceMemory` or a specific region of a bigger block of this type
plus a unique offset.

There are multiple ways to create such an object.
You need to fill structure VmaAllocationCreateInfo.
For more information see [Choosing memory type](@ref choosing_memory_type).

Although the library provides convenience functions that create a Vulkan buffer or image,
allocate memory for it, and bind them together,
binding of the allocation to a buffer or an image is out of scope of the allocation itself.
An allocation object can exist without a buffer/image bound to it,
binding can be done manually by the user, and destruction of the buffer/image can be done
independently of destruction of the allocation.

The object also remembers its size and some other information.
To retrieve this information, use function vmaGetAllocationInfo() and inspect the
returned structure VmaAllocationInfo.
*/
VK_DEFINE_HANDLE(VmaAllocation)

/** \struct VmaDefragmentationContext
\brief An opaque object that represents a started defragmentation process.

Fill structure #VmaDefragmentationInfo and call function vmaBeginDefragmentation() to create it.
Call function vmaEndDefragmentation() to destroy it.
*/
VK_DEFINE_HANDLE(VmaDefragmentationContext)

/** @} */

/**
\addtogroup group_virtual
@{
*/

/** \struct VmaVirtualAllocation
\brief Represents a single memory allocation done inside a VmaVirtualBlock.

Use it as a unique identifier of a virtual allocation within a single block.

Use the value `VK_NULL_HANDLE` to represent a null/invalid allocation.
*/
VK_DEFINE_NON_DISPATCHABLE_HANDLE(VmaVirtualAllocation);

/** @} */

/**
\addtogroup group_virtual
@{
*/

/** \struct VmaVirtualBlock
\brief Handle to a virtual block object that allows using the core allocation algorithm without allocating any real GPU memory.

Fill in the #VmaVirtualBlockCreateInfo structure and use vmaCreateVirtualBlock() to create it. Use vmaDestroyVirtualBlock() to destroy it.
For more information, see documentation chapter \ref virtual_allocator.

This object is not thread-safe: it should not be used from multiple threads simultaneously and must be synchronized externally.
*/
VK_DEFINE_HANDLE(VmaVirtualBlock)

/** @} */

/**
\addtogroup group_init
@{
*/

/// Callback function called after successful vkAllocateMemory.
typedef void (VKAPI_PTR* PFN_vmaAllocateDeviceMemoryFunction)(
    VmaAllocator VMA_NOT_NULL                    allocator,
    uint32_t                                     memoryType,
    VkDeviceMemory VMA_NOT_NULL_NON_DISPATCHABLE memory,
    VkDeviceSize                                 size,
    void* VMA_NULLABLE                           pUserData);

/// Callback function called before vkFreeMemory.
typedef void (VKAPI_PTR* PFN_vmaFreeDeviceMemoryFunction)(
    VmaAllocator VMA_NOT_NULL                    allocator,
    uint32_t                                     memoryType,
    VkDeviceMemory VMA_NOT_NULL_NON_DISPATCHABLE memory,
    VkDeviceSize                                 size,
    void* VMA_NULLABLE                           pUserData);

/** \brief Set of callbacks that the library will call for `vkAllocateMemory` and `vkFreeMemory`.

Provided for informative purposes, e.g. to gather statistics about the number of
allocations or the total amount of memory allocated in Vulkan.

Used in VmaAllocatorCreateInfo::pDeviceMemoryCallbacks.
*/
typedef struct VmaDeviceMemoryCallbacks
{
    /// Optional, can be null.
    PFN_vmaAllocateDeviceMemoryFunction VMA_NULLABLE pfnAllocate;
    /// Optional, can be null.
    PFN_vmaFreeDeviceMemoryFunction VMA_NULLABLE pfnFree;
    /// Optional, can be null.
    void* VMA_NULLABLE pUserData;
} VmaDeviceMemoryCallbacks;
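
/*
An illustrative sketch of informative callbacks that count live `VkDeviceMemory` blocks
(`myBlockCounter` is an application-provided counter; a real implementation should make
it thread-safe unless the allocator is externally synchronized):

\code
static void VKAPI_PTR MyAllocateCallback(VmaAllocator allocator, uint32_t memoryType,
    VkDeviceMemory memory, VkDeviceSize size, void* pUserData)
{
    ++*(uint32_t*)pUserData; // one more live VkDeviceMemory block
}
static void VKAPI_PTR MyFreeCallback(VmaAllocator allocator, uint32_t memoryType,
    VkDeviceMemory memory, VkDeviceSize size, void* pUserData)
{
    --*(uint32_t*)pUserData; // one block about to be freed
}

VmaDeviceMemoryCallbacks deviceMemoryCallbacks = {};
deviceMemoryCallbacks.pfnAllocate = MyAllocateCallback;
deviceMemoryCallbacks.pfnFree = MyFreeCallback;
deviceMemoryCallbacks.pUserData = &myBlockCounter;
// Then: allocatorCreateInfo.pDeviceMemoryCallbacks = &deviceMemoryCallbacks;
\endcode
*/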

/** \brief Pointers to some Vulkan functions - a subset used by the library.

Used in VmaAllocatorCreateInfo::pVulkanFunctions.
*/
typedef struct VmaVulkanFunctions
{
    /// Required when using VMA_DYNAMIC_VULKAN_FUNCTIONS.
    PFN_vkGetInstanceProcAddr VMA_NULLABLE vkGetInstanceProcAddr;
    /// Required when using VMA_DYNAMIC_VULKAN_FUNCTIONS.
    PFN_vkGetDeviceProcAddr VMA_NULLABLE vkGetDeviceProcAddr;
    PFN_vkGetPhysicalDeviceProperties VMA_NULLABLE vkGetPhysicalDeviceProperties;
    PFN_vkGetPhysicalDeviceMemoryProperties VMA_NULLABLE vkGetPhysicalDeviceMemoryProperties;
    PFN_vkAllocateMemory VMA_NULLABLE vkAllocateMemory;
    PFN_vkFreeMemory VMA_NULLABLE vkFreeMemory;
    PFN_vkMapMemory VMA_NULLABLE vkMapMemory;
    PFN_vkUnmapMemory VMA_NULLABLE vkUnmapMemory;
    PFN_vkFlushMappedMemoryRanges VMA_NULLABLE vkFlushMappedMemoryRanges;
    PFN_vkInvalidateMappedMemoryRanges VMA_NULLABLE vkInvalidateMappedMemoryRanges;
    PFN_vkBindBufferMemory VMA_NULLABLE vkBindBufferMemory;
    PFN_vkBindImageMemory VMA_NULLABLE vkBindImageMemory;
    PFN_vkGetBufferMemoryRequirements VMA_NULLABLE vkGetBufferMemoryRequirements;
    PFN_vkGetImageMemoryRequirements VMA_NULLABLE vkGetImageMemoryRequirements;
    PFN_vkCreateBuffer VMA_NULLABLE vkCreateBuffer;
    PFN_vkDestroyBuffer VMA_NULLABLE vkDestroyBuffer;
    PFN_vkCreateImage VMA_NULLABLE vkCreateImage;
    PFN_vkDestroyImage VMA_NULLABLE vkDestroyImage;
    PFN_vkCmdCopyBuffer VMA_NULLABLE vkCmdCopyBuffer;
#if VMA_DEDICATED_ALLOCATION || VMA_VULKAN_VERSION >= 1001000
    /// Fetch "vkGetBufferMemoryRequirements2" on Vulkan >= 1.1, fetch "vkGetBufferMemoryRequirements2KHR" when using the VK_KHR_dedicated_allocation extension.
    PFN_vkGetBufferMemoryRequirements2KHR VMA_NULLABLE vkGetBufferMemoryRequirements2KHR;
    /// Fetch "vkGetImageMemoryRequirements2" on Vulkan >= 1.1, fetch "vkGetImageMemoryRequirements2KHR" when using the VK_KHR_dedicated_allocation extension.
    PFN_vkGetImageMemoryRequirements2KHR VMA_NULLABLE vkGetImageMemoryRequirements2KHR;
#endif
#if VMA_BIND_MEMORY2 || VMA_VULKAN_VERSION >= 1001000
    /// Fetch "vkBindBufferMemory2" on Vulkan >= 1.1, fetch "vkBindBufferMemory2KHR" when using the VK_KHR_bind_memory2 extension.
    PFN_vkBindBufferMemory2KHR VMA_NULLABLE vkBindBufferMemory2KHR;
    /// Fetch "vkBindImageMemory2" on Vulkan >= 1.1, fetch "vkBindImageMemory2KHR" when using the VK_KHR_bind_memory2 extension.
    PFN_vkBindImageMemory2KHR VMA_NULLABLE vkBindImageMemory2KHR;
#endif
#if VMA_MEMORY_BUDGET || VMA_VULKAN_VERSION >= 1001000
    PFN_vkGetPhysicalDeviceMemoryProperties2KHR VMA_NULLABLE vkGetPhysicalDeviceMemoryProperties2KHR;
#endif
#if VMA_VULKAN_VERSION >= 1003000
    /// Fetch from "vkGetDeviceBufferMemoryRequirements" on Vulkan >= 1.3, but you can also fetch it from "vkGetDeviceBufferMemoryRequirementsKHR" if you enabled the extension VK_KHR_maintenance4.
    PFN_vkGetDeviceBufferMemoryRequirements VMA_NULLABLE vkGetDeviceBufferMemoryRequirements;
    /// Fetch from "vkGetDeviceImageMemoryRequirements" on Vulkan >= 1.3, but you can also fetch it from "vkGetDeviceImageMemoryRequirementsKHR" if you enabled the extension VK_KHR_maintenance4.
    PFN_vkGetDeviceImageMemoryRequirements VMA_NULLABLE vkGetDeviceImageMemoryRequirements;
#endif
} VmaVulkanFunctions;
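
/*
When the library is compiled with VMA_DYNAMIC_VULKAN_FUNCTIONS, only the two loader
entry points need to be provided and the rest are fetched automatically - an
illustrative sketch:

\code
VmaVulkanFunctions vulkanFunctions = {};
vulkanFunctions.vkGetInstanceProcAddr = vkGetInstanceProcAddr;
vulkanFunctions.vkGetDeviceProcAddr = vkGetDeviceProcAddr;

VmaAllocatorCreateInfo allocatorCreateInfo = {};
// ... fill other members ...
allocatorCreateInfo.pVulkanFunctions = &vulkanFunctions;
\endcode
*/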

/// Description of an Allocator to be created.
typedef struct VmaAllocatorCreateInfo
{
    /// Flags for the created allocator. Use #VmaAllocatorCreateFlagBits enum.
    VmaAllocatorCreateFlags flags;
    /// Vulkan physical device.
    /** It must be valid throughout the whole lifetime of the created allocator. */
    VkPhysicalDevice VMA_NOT_NULL physicalDevice;
    /// Vulkan device.
    /** It must be valid throughout the whole lifetime of the created allocator. */
    VkDevice VMA_NOT_NULL device;
    /// Preferred size of a single `VkDeviceMemory` block to be allocated from large heaps > 1 GiB. Optional.
    /** Set to 0 to use the default, which is currently 256 MiB. */
    VkDeviceSize preferredLargeHeapBlockSize;
    /// Custom CPU memory allocation callbacks. Optional.
    /** Optional, can be null. When specified, will also be used for all CPU-side memory allocations. */
    const VkAllocationCallbacks* VMA_NULLABLE pAllocationCallbacks;
    /// Informative callbacks for `vkAllocateMemory`, `vkFreeMemory`. Optional.
    /** Optional, can be null. */
    const VmaDeviceMemoryCallbacks* VMA_NULLABLE pDeviceMemoryCallbacks;
    /** \brief Either null or a pointer to an array of limits on the maximum number of bytes that can be allocated out of a particular Vulkan memory heap.

    If not NULL, it must be a pointer to an array of
    `VkPhysicalDeviceMemoryProperties::memoryHeapCount` elements, defining a limit on the
    maximum number of bytes that can be allocated out of a particular Vulkan memory
    heap.

    Any of the elements may be equal to `VK_WHOLE_SIZE`, which means no limit on that
    heap. This is also the default in case of `pHeapSizeLimit` = NULL.

    If there is a limit defined for a heap:

    - If the user tries to allocate more memory from that heap using this allocator,
      the allocation fails with `VK_ERROR_OUT_OF_DEVICE_MEMORY`.
    - If the limit is smaller than the heap size reported in `VkMemoryHeap::size`, the
      value of this limit will be reported instead when using vmaGetMemoryProperties().

    Warning! Using this feature may not be equivalent to installing a GPU with a
    smaller amount of memory, because the graphics driver doesn't necessarily fail new
    allocations with the `VK_ERROR_OUT_OF_DEVICE_MEMORY` result when memory capacity is
    exceeded. It may return success and just silently migrate some device memory
    blocks to system RAM. This driver behavior can also be controlled using the
    VK_AMD_memory_overallocation_behavior extension.
    */
    const VkDeviceSize* VMA_NULLABLE VMA_LEN_IF_NOT_NULL("VkPhysicalDeviceMemoryProperties::memoryHeapCount") pHeapSizeLimit;

    /** \brief Pointers to Vulkan functions. Can be null.

    For details see [Pointers to Vulkan functions](@ref config_Vulkan_functions).
    */
    const VmaVulkanFunctions* VMA_NULLABLE pVulkanFunctions;
    /** \brief Handle to Vulkan instance object.

    Starting from version 3.0.0 this member is no longer optional, it must be set!
    */
    VkInstance VMA_NOT_NULL instance;
    /** \brief Optional. The highest version of Vulkan that the application is designed to use.

    It must be a value in the format as created by macro `VK_MAKE_VERSION` or a constant like: `VK_API_VERSION_1_1`, `VK_API_VERSION_1_0`.
    The patch version number specified is ignored. Only the major and minor versions are considered.
    It must be less than or equal (preferably equal) to the value passed to `vkCreateInstance` as `VkApplicationInfo::apiVersion`.
    Only versions 1.0, 1.1, 1.2, 1.3 are supported by the current implementation.
    Leaving it initialized to zero is equivalent to `VK_API_VERSION_1_0`.
    */
    uint32_t vulkanApiVersion;
#if VMA_EXTERNAL_MEMORY
    /** \brief Either null or a pointer to an array of external memory handle types for each Vulkan memory type.

    If not NULL, it must be a pointer to an array of `VkPhysicalDeviceMemoryProperties::memoryTypeCount`
    elements, defining the external memory handle types of a particular Vulkan memory type,
    to be passed using `VkExportMemoryAllocateInfoKHR`.

    Any of the elements may be equal to 0, which means not to use `VkExportMemoryAllocateInfoKHR` on this memory type.
    This is also the default in case of `pTypeExternalMemoryHandleTypes` = NULL.
    */
    const VkExternalMemoryHandleTypeFlagsKHR* VMA_NULLABLE VMA_LEN_IF_NOT_NULL("VkPhysicalDeviceMemoryProperties::memoryTypeCount") pTypeExternalMemoryHandleTypes;
#endif // #if VMA_EXTERNAL_MEMORY
} VmaAllocatorCreateInfo;
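
/*
An illustrative sketch of `pHeapSizeLimit`: capping heap 0 at 1 GiB while leaving the
remaining heaps unlimited (heap indices and count are device-specific; the array must
have at least `VkPhysicalDeviceMemoryProperties::memoryHeapCount` elements):

\code
VkDeviceSize heapSizeLimits[VK_MAX_MEMORY_HEAPS];
for(uint32_t i = 0; i < VK_MAX_MEMORY_HEAPS; ++i)
    heapSizeLimits[i] = VK_WHOLE_SIZE; // no limit by default
heapSizeLimits[0] = 1024ull * 1024 * 1024; // 1 GiB cap on heap 0

VmaAllocatorCreateInfo allocatorCreateInfo = {};
// ... fill other members ...
allocatorCreateInfo.pHeapSizeLimit = heapSizeLimits;
\endcode
*/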

/// Information about an existing #VmaAllocator object.
typedef struct VmaAllocatorInfo
{
    /** \brief Handle to Vulkan instance object.

    This is the same value as was passed through VmaAllocatorCreateInfo::instance.
    */
    VkInstance VMA_NOT_NULL instance;
    /** \brief Handle to Vulkan physical device object.

    This is the same value as was passed through VmaAllocatorCreateInfo::physicalDevice.
    */
    VkPhysicalDevice VMA_NOT_NULL physicalDevice;
    /** \brief Handle to Vulkan device object.

    This is the same value as was passed through VmaAllocatorCreateInfo::device.
    */
    VkDevice VMA_NOT_NULL device;
} VmaAllocatorInfo;

/** @} */

/**
\addtogroup group_stats
@{
*/

1105 /** \brief Calculated statistics of memory usage e.g. in a specific memory type, heap, custom pool, or total.
1106 
1107 These are fast to calculate.
1108 See functions: vmaGetHeapBudgets(), vmaGetPoolStatistics().
1109 */
1110 typedef struct VmaStatistics
1111 {
1112     /** \brief Number of `VkDeviceMemory` objects - Vulkan memory blocks allocated.
1113     */
1114     uint32_t blockCount;
1115     /** \brief Number of #VmaAllocation objects allocated.
1116 
1117     Dedicated allocations have their own blocks, so each one adds 1 to `allocationCount` as well as `blockCount`.
1118     */
1119     uint32_t allocationCount;
1120     /** \brief Number of bytes allocated in `VkDeviceMemory` blocks.
1121 
    \note To avoid confusion, please be aware that what Vulkan calls an "allocation" - a whole `VkDeviceMemory` object
    (e.g. as in `VkPhysicalDeviceLimits::maxMemoryAllocationCount`) - is called a "block" in VMA, while what VMA calls
    an "allocation" is a #VmaAllocation object that represents a memory region sub-allocated from such a block, usually for a single buffer or image.
1125     */
1126     VkDeviceSize blockBytes;
1127     /** \brief Total number of bytes occupied by all #VmaAllocation objects.
1128 
    Always less than or equal to `blockBytes`.
1130     Difference `(blockBytes - allocationBytes)` is the amount of memory allocated from Vulkan
1131     but unused by any #VmaAllocation.
1132     */
1133     VkDeviceSize allocationBytes;
1134 } VmaStatistics;
1135 
1136 /** \brief More detailed statistics than #VmaStatistics.
1137 
1138 These are slower to calculate. Use for debugging purposes.
1139 See functions: vmaCalculateStatistics(), vmaCalculatePoolStatistics().
1140 
The previous version of the statistics API provided averages, but they have been removed
because they can be easily calculated as:
1143 
1144 \code
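// Note: guard against allocationCount == 0 and unusedRangeCount == 0 before dividing.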
1145 VkDeviceSize allocationSizeAvg = detailedStats.statistics.allocationBytes / detailedStats.statistics.allocationCount;
1146 VkDeviceSize unusedBytes = detailedStats.statistics.blockBytes - detailedStats.statistics.allocationBytes;
1147 VkDeviceSize unusedRangeSizeAvg = unusedBytes / detailedStats.unusedRangeCount;
1148 \endcode
1149 */
1150 typedef struct VmaDetailedStatistics
1151 {
1152     /// Basic statistics.
1153     VmaStatistics statistics;
1154     /// Number of free ranges of memory between allocations.
1155     uint32_t unusedRangeCount;
1156     /// Smallest allocation size. `VK_WHOLE_SIZE` if there are 0 allocations.
1157     VkDeviceSize allocationSizeMin;
1158     /// Largest allocation size. 0 if there are 0 allocations.
1159     VkDeviceSize allocationSizeMax;
1160     /// Smallest empty range size. `VK_WHOLE_SIZE` if there are 0 empty ranges.
1161     VkDeviceSize unusedRangeSizeMin;
1162     /// Largest empty range size. 0 if there are 0 empty ranges.
1163     VkDeviceSize unusedRangeSizeMax;
1164 } VmaDetailedStatistics;
1165 
/** \brief General statistics from the current state of the Allocator -
total memory usage across all memory heaps and types.
1168 
1169 These are slower to calculate. Use for debugging purposes.
1170 See function vmaCalculateStatistics().
1171 */
1172 typedef struct VmaTotalStatistics
1173 {
1174     VmaDetailedStatistics memoryType[VK_MAX_MEMORY_TYPES];
1175     VmaDetailedStatistics memoryHeap[VK_MAX_MEMORY_HEAPS];
1176     VmaDetailedStatistics total;
1177 } VmaTotalStatistics;
1178 
1179 /** \brief Statistics of current memory usage and available budget for a specific memory heap.
1180 
1181 These are fast to calculate.
1182 See function vmaGetHeapBudgets().
1183 */
1184 typedef struct VmaBudget
1185 {
1186     /** \brief Statistics fetched from the library.
1187     */
1188     VmaStatistics statistics;
1189     /** \brief Estimated current memory usage of the program, in bytes.
1190 
1191     Fetched from system using VK_EXT_memory_budget extension if enabled.
1192 
    It might be different from `statistics.blockBytes` (usually higher) due to additional implicit objects
    that also occupy the memory, like swapchains, pipelines, descriptor pools, command buffers, or
    `VkDeviceMemory` blocks allocated outside of this library, if any.
1196     */
1197     VkDeviceSize usage;
1198     /** \brief Estimated amount of memory available to the program, in bytes.
1199 
1200     Fetched from system using VK_EXT_memory_budget extension if enabled.
1201 
    It might be different (most probably smaller) than the heap size reported in
    `VkPhysicalDeviceMemoryProperties::memoryHeaps[heapIndex].size` due to factors
    external to the program, decided by the operating system.
    The difference `budget - usage` is the amount of additional memory that can probably
    be allocated without problems. Exceeding the budget may result in various problems.
1206     */
1207     VkDeviceSize budget;
1208 } VmaBudget;
1209 
1210 /** @} */
1211 
1212 /**
1213 \addtogroup group_alloc
1214 @{
1215 */
1216 
1217 /** \brief Parameters of new #VmaAllocation.
1218 
1219 To be used with functions like vmaCreateBuffer(), vmaCreateImage(), and many others.
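
A minimal sketch of typical usage, assuming an existing `allocator`:

\code
VkBufferCreateInfo bufCreateInfo = { VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO };
bufCreateInfo.size = 65536;
bufCreateInfo.usage = VK_BUFFER_USAGE_STORAGE_BUFFER_BIT | VK_BUFFER_USAGE_TRANSFER_DST_BIT;

VmaAllocationCreateInfo allocCreateInfo = {};
allocCreateInfo.usage = VMA_MEMORY_USAGE_AUTO;

VkBuffer buf;
VmaAllocation alloc;
VkResult res = vmaCreateBuffer(allocator, &bufCreateInfo, &allocCreateInfo, &buf, &alloc, NULL);
\endcode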
1220 */
1221 typedef struct VmaAllocationCreateInfo
1222 {
1223     /// Use #VmaAllocationCreateFlagBits enum.
1224     VmaAllocationCreateFlags flags;
1225     /** \brief Intended usage of memory.
1226 
    You can leave it as #VMA_MEMORY_USAGE_UNKNOWN if you specify memory requirements in another way. \n
    If `pool` is not null, this member is ignored.
1229     */
1230     VmaMemoryUsage usage;
    /** \brief Flags that must be set in a memory type chosen for an allocation.

    Leave 0 if you specify memory requirements in another way. \n
    If `pool` is not null, this member is ignored.
    */
1235     VkMemoryPropertyFlags requiredFlags;
1236     /** \brief Flags that preferably should be set in a memory type chosen for an allocation.
1237 
1238     Set to 0 if no additional flags are preferred. \n
1239     If `pool` is not null, this member is ignored. */
1240     VkMemoryPropertyFlags preferredFlags;
1241     /** \brief Bitmask containing one bit set for every memory type acceptable for this allocation.
1242 
1243     Value 0 is equivalent to `UINT32_MAX` - it means any memory type is accepted if
1244     it meets other requirements specified by this structure, with no further
1245     restrictions on memory type index. \n
1246     If `pool` is not null, this member is ignored.
1247     */
1248     uint32_t memoryTypeBits;
1249     /** \brief Pool that this allocation should be created in.
1250 
    Leave `VK_NULL_HANDLE` to allocate from the default pool. If not null, members:
1252     `usage`, `requiredFlags`, `preferredFlags`, `memoryTypeBits` are ignored.
1253     */
1254     VmaPool VMA_NULLABLE pool;
1255     /** \brief Custom general-purpose pointer that will be stored in #VmaAllocation, can be read as VmaAllocationInfo::pUserData and changed using vmaSetAllocationUserData().
1256 
    If #VMA_ALLOCATION_CREATE_USER_DATA_COPY_STRING_BIT is used, it must be either
    null or a pointer to a null-terminated string. The string will then be copied to
    an internal buffer, so it doesn't need to remain valid after the allocation call.
1260     */
1261     void* VMA_NULLABLE pUserData;
1262     /** \brief A floating-point value between 0 and 1, indicating the priority of the allocation relative to other memory allocations.
1263 
1264     It is used only when #VMA_ALLOCATOR_CREATE_EXT_MEMORY_PRIORITY_BIT flag was used during creation of the #VmaAllocator object
1265     and this allocation ends up as dedicated or is explicitly forced as dedicated using #VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT.
1266     Otherwise, it has the priority of a memory block where it is placed and this variable is ignored.
1267     */
1268     float priority;
1269 } VmaAllocationCreateInfo;
1270 
/// Describes parameters of created #VmaPool.
1272 typedef struct VmaPoolCreateInfo
1273 {
1274     /** \brief Vulkan memory type index to allocate this pool from.
1275     */
1276     uint32_t memoryTypeIndex;
1277     /** \brief Use combination of #VmaPoolCreateFlagBits.
1278     */
1279     VmaPoolCreateFlags flags;
1280     /** \brief Size of a single `VkDeviceMemory` block to be allocated as part of this pool, in bytes. Optional.
1281 
    Specify nonzero to set an explicit, constant size of memory blocks used by this
    pool.

    Leave 0 to use the default and let the library manage block sizes automatically.
1286     Sizes of particular blocks may vary.
1287     In this case, the pool will also support dedicated allocations.
1288     */
1289     VkDeviceSize blockSize;
1290     /** \brief Minimum number of blocks to be always allocated in this pool, even if they stay empty.
1291 
    Set to 0 to have no preallocated blocks and allow the pool to be completely empty.
1293     */
1294     size_t minBlockCount;
1295     /** \brief Maximum number of blocks that can be allocated in this pool. Optional.
1296 
    Set to 0 to use the default, which is `SIZE_MAX`, meaning no limit.

    Set to the same value as VmaPoolCreateInfo::minBlockCount to have a fixed amount of memory allocated
    throughout the whole lifetime of this pool.
1301     */
1302     size_t maxBlockCount;
1303     /** \brief A floating-point value between 0 and 1, indicating the priority of the allocations in this pool relative to other memory allocations.
1304 
1305     It is used only when #VMA_ALLOCATOR_CREATE_EXT_MEMORY_PRIORITY_BIT flag was used during creation of the #VmaAllocator object.
1306     Otherwise, this variable is ignored.
1307     */
1308     float priority;
1309     /** \brief Additional minimum alignment to be used for all allocations created from this pool. Can be 0.
1310 
    Leave 0 (the default) to not impose any additional alignment. If not 0, it must be a power of two.
    It can be useful in cases where the alignment returned by Vulkan functions like `vkGetBufferMemoryRequirements` is not enough,
    e.g. when doing interop with OpenGL.
1314     */
1315     VkDeviceSize minAllocationAlignment;
1316     /** \brief Additional `pNext` chain to be attached to `VkMemoryAllocateInfo` used for every allocation made by this pool. Optional.
1317 
1318     Optional, can be null. If not null, it must point to a `pNext` chain of structures that can be attached to `VkMemoryAllocateInfo`.
1319     It can be useful for special needs such as adding `VkExportMemoryAllocateInfoKHR`.
1320     Structures pointed by this member must remain alive and unchanged for the whole lifetime of the custom pool.
1321 
    Please note that some structures, e.g. `VkMemoryPriorityAllocateInfoEXT`, `VkMemoryDedicatedAllocateInfoKHR`,
    can be attached automatically by this library when using its other, more convenient features.
1324     */
1325     void* VMA_NULLABLE pMemoryAllocateNext;
1326 } VmaPoolCreateInfo;
1327 
1328 /** @} */
1329 
1330 /**
1331 \addtogroup group_alloc
1332 @{
1333 */
1334 
/// Parameters of #VmaAllocation objects that can be retrieved using function vmaGetAllocationInfo().
1336 typedef struct VmaAllocationInfo
1337 {
1338     /** \brief Memory type index that this allocation was allocated from.
1339 
1340     It never changes.
1341     */
1342     uint32_t memoryType;
1343     /** \brief Handle to Vulkan memory object.
1344 
1345     Same memory object can be shared by multiple allocations.
1346 
1347     It can change after the allocation is moved during \ref defragmentation.
1348     */
1349     VkDeviceMemory VMA_NULLABLE_NON_DISPATCHABLE deviceMemory;
1350     /** \brief Offset in `VkDeviceMemory` object to the beginning of this allocation, in bytes. `(deviceMemory, offset)` pair is unique to this allocation.
1351 
    You usually don't need to use this offset. If you create a buffer or an image together with the allocation using e.g.
    vmaCreateBuffer() or vmaCreateImage(), functions that operate on these resources refer to the beginning of the buffer or image,
    not the entire device memory block. Functions like vmaMapMemory() and vmaBindBufferMemory() also refer to the beginning of the allocation
    and apply this offset automatically.
1356 
1357     It can change after the allocation is moved during \ref defragmentation.
1358     */
1359     VkDeviceSize offset;
1360     /** \brief Size of this allocation, in bytes.
1361 
1362     It never changes.
1363 
    \note The allocation size returned in this variable may be greater than the size
    requested for the resource e.g. as `VkBufferCreateInfo::size`. The whole size of the
    allocation is accessible for operations on memory e.g. using a pointer after
    mapping with vmaMapMemory(), but operations on the resource e.g. using
    `vkCmdCopyBuffer` must be limited to the size of the resource.
1369     */
1370     VkDeviceSize size;
1371     /** \brief Pointer to the beginning of this allocation as mapped data.
1372 
1373     If the allocation hasn't been mapped using vmaMapMemory() and hasn't been
1374     created with #VMA_ALLOCATION_CREATE_MAPPED_BIT flag, this value is null.
1375 
1376     It can change after call to vmaMapMemory(), vmaUnmapMemory().
1377     It can also change after the allocation is moved during \ref defragmentation.
1378     */
1379     void* VMA_NULLABLE pMappedData;
1380     /** \brief Custom general-purpose pointer that was passed as VmaAllocationCreateInfo::pUserData or set using vmaSetAllocationUserData().
1381 
1382     It can change after call to vmaSetAllocationUserData() for this allocation.
1383     */
1384     void* VMA_NULLABLE pUserData;
1385     /** \brief Custom allocation name that was set with vmaSetAllocationName().
1386 
1387     It can change after call to vmaSetAllocationName() for this allocation.
1388 
1389     Another way to set custom name is to pass it in VmaAllocationCreateInfo::pUserData with
1390     additional flag #VMA_ALLOCATION_CREATE_USER_DATA_COPY_STRING_BIT set [DEPRECATED].
1391     */
1392     const char* VMA_NULLABLE pName;
1393 } VmaAllocationInfo;
1394 
1395 /** \brief Parameters for defragmentation.
1396 
1397 To be used with function vmaBeginDefragmentation().
1398 */
1399 typedef struct VmaDefragmentationInfo
1400 {
1401     /// \brief Use combination of #VmaDefragmentationFlagBits.
1402     VmaDefragmentationFlags flags;
1403     /** \brief Custom pool to be defragmented.
1404 
    If null, then default pools will undergo the defragmentation process.
1406     */
1407     VmaPool VMA_NULLABLE pool;
    /** \brief Maximum number of bytes that can be copied during a single pass, while moving allocations to different places.
1409 
1410     `0` means no limit.
1411     */
1412     VkDeviceSize maxBytesPerPass;
    /** \brief Maximum number of allocations that can be moved during a single pass to a different place.
1414 
1415     `0` means no limit.
1416     */
1417     uint32_t maxAllocationsPerPass;
1418 } VmaDefragmentationInfo;
1419 
1420 /// Single move of an allocation to be done for defragmentation.
1421 typedef struct VmaDefragmentationMove
1422 {
1423     /// Operation to be performed on the allocation by vmaEndDefragmentationPass(). Default value is #VMA_DEFRAGMENTATION_MOVE_OPERATION_COPY. You can modify it.
1424     VmaDefragmentationMoveOperation operation;
1425     /// Allocation that should be moved.
1426     VmaAllocation VMA_NOT_NULL srcAllocation;
1427     /** \brief Temporary allocation pointing to destination memory that will replace `srcAllocation`.
1428 
1429     \warning Do not store this allocation in your data structures! It exists only temporarily, for the duration of the defragmentation pass,
1430     to be used for binding new buffer/image to the destination memory using e.g. vmaBindBufferMemory().
1431     vmaEndDefragmentationPass() will destroy it and make `srcAllocation` point to this memory.
1432     */
1433     VmaAllocation VMA_NOT_NULL dstTmpAllocation;
1434 } VmaDefragmentationMove;
1435 
1436 /** \brief Parameters for incremental defragmentation steps.
1437 
1438 To be used with function vmaBeginDefragmentationPass().
1439 */
1440 typedef struct VmaDefragmentationPassMoveInfo
1441 {
1442     /// Number of elements in the `pMoves` array.
1443     uint32_t moveCount;
1444     /** \brief Array of moves to be performed by the user in the current defragmentation pass.
1445 
1446     Pointer to an array of `moveCount` elements, owned by VMA, created in vmaBeginDefragmentationPass(), destroyed in vmaEndDefragmentationPass().
1447 
1448     For each element, you should:
1449 
    1. Create a new buffer/image in the place pointed to by VmaDefragmentationMove::dstTmpAllocation.
1451     2. Copy data from the VmaDefragmentationMove::srcAllocation e.g. using `vkCmdCopyBuffer`, `vkCmdCopyImage`.
1452     3. Make sure these commands finished executing on the GPU.
1453     4. Destroy the old buffer/image.
1454 
    Only then can you finish the defragmentation pass by calling vmaEndDefragmentationPass().
    After this call, the allocation will point to the new place in memory.
1457 
    Alternatively, if you cannot move a specific allocation, you can set VmaDefragmentationMove::operation to #VMA_DEFRAGMENTATION_MOVE_OPERATION_IGNORE.
1459 
1460     Alternatively, if you decide you want to completely remove the allocation:
1461 
1462     1. Destroy its buffer/image.
1463     2. Set VmaDefragmentationMove::operation to #VMA_DEFRAGMENTATION_MOVE_OPERATION_DESTROY.
1464 
1465     Then, after vmaEndDefragmentationPass() the allocation will be freed.
1466     */
1467     VmaDefragmentationMove* VMA_NULLABLE VMA_LEN_IF_NOT_NULL(moveCount) pMoves;
1468 } VmaDefragmentationPassMoveInfo;
1469 
1470 /// Statistics returned for defragmentation process in function vmaEndDefragmentation().
1471 typedef struct VmaDefragmentationStats
1472 {
1473     /// Total number of bytes that have been copied while moving allocations to different places.
1474     VkDeviceSize bytesMoved;
1475     /// Total number of bytes that have been released to the system by freeing empty `VkDeviceMemory` objects.
1476     VkDeviceSize bytesFreed;
1477     /// Number of allocations that have been moved to different places.
1478     uint32_t allocationsMoved;
1479     /// Number of empty `VkDeviceMemory` objects that have been released to the system.
1480     uint32_t deviceMemoryBlocksFreed;
1481 } VmaDefragmentationStats;
1482 
1483 /** @} */
1484 
1485 /**
1486 \addtogroup group_virtual
1487 @{
1488 */
1489 
1490 /// Parameters of created #VmaVirtualBlock object to be passed to vmaCreateVirtualBlock().
1491 typedef struct VmaVirtualBlockCreateInfo
1492 {
1493     /** \brief Total size of the virtual block.
1494 
1495     Sizes can be expressed in bytes or any units you want as long as you are consistent in using them.
    For example, if you allocate from some array of structures, 1 can mean a single instance of the entire structure.
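
    A minimal sketch of creating a virtual block and making one allocation in it, assuming
    sizes are expressed in bytes (vmaCreateVirtualBlock() and vmaVirtualAllocate() are declared
    later in this header):

    \code
    VmaVirtualBlockCreateInfo blockCreateInfo = {};
    blockCreateInfo.size = 1048576; // 1 MiB.

    VmaVirtualBlock block;
    VkResult res = vmaCreateVirtualBlock(&blockCreateInfo, &block);

    VmaVirtualAllocationCreateInfo allocCreateInfo = {};
    allocCreateInfo.size = 4096; // Same units as the block size.

    VmaVirtualAllocation alloc;
    VkDeviceSize offset;
    res = vmaVirtualAllocate(block, &allocCreateInfo, &alloc, &offset);
    \endcode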
1497     */
1498     VkDeviceSize size;
1499 
1500     /** \brief Use combination of #VmaVirtualBlockCreateFlagBits.
1501     */
1502     VmaVirtualBlockCreateFlags flags;
1503 
1504     /** \brief Custom CPU memory allocation callbacks. Optional.
1505 
1506     Optional, can be null. When specified, they will be used for all CPU-side memory allocations.
1507     */
1508     const VkAllocationCallbacks* VMA_NULLABLE pAllocationCallbacks;
1509 } VmaVirtualBlockCreateInfo;
1510 
1511 /// Parameters of created virtual allocation to be passed to vmaVirtualAllocate().
1512 typedef struct VmaVirtualAllocationCreateInfo
1513 {
1514     /** \brief Size of the allocation.
1515 
1516     Cannot be zero.
1517     */
1518     VkDeviceSize size;
1519     /** \brief Required alignment of the allocation. Optional.
1520 
    Must be a power of two. The special value 0 has the same meaning as 1 - no special alignment is required, so the allocation can start at any offset.
1522     */
1523     VkDeviceSize alignment;
1524     /** \brief Use combination of #VmaVirtualAllocationCreateFlagBits.
1525     */
1526     VmaVirtualAllocationCreateFlags flags;
1527     /** \brief Custom pointer to be associated with the allocation. Optional.
1528 
1529     It can be any value and can be used for user-defined purposes. It can be fetched or changed later.
1530     */
1531     void* VMA_NULLABLE pUserData;
1532 } VmaVirtualAllocationCreateInfo;
1533 
1534 /// Parameters of an existing virtual allocation, returned by vmaGetVirtualAllocationInfo().
1535 typedef struct VmaVirtualAllocationInfo
1536 {
1537     /** \brief Offset of the allocation.
1538 
1539     Offset at which the allocation was made.
1540     */
1541     VkDeviceSize offset;
1542     /** \brief Size of the allocation.
1543 
1544     Same value as passed in VmaVirtualAllocationCreateInfo::size.
1545     */
1546     VkDeviceSize size;
1547     /** \brief Custom pointer associated with the allocation.
1548 
1549     Same value as passed in VmaVirtualAllocationCreateInfo::pUserData or to vmaSetVirtualAllocationUserData().
1550     */
1551     void* VMA_NULLABLE pUserData;
1552 } VmaVirtualAllocationInfo;
1553 
1554 /** @} */
1555 
1556 #endif // _VMA_DATA_TYPES_DECLARATIONS
1557 
1558 #ifndef _VMA_FUNCTION_HEADERS
1559 
1560 /**
1561 \addtogroup group_init
1562 @{
1563 */
1564 
1565 /// Creates #VmaAllocator object.
1566 VMA_CALL_PRE VkResult VMA_CALL_POST vmaCreateAllocator(
1567     const VmaAllocatorCreateInfo* VMA_NOT_NULL pCreateInfo,
1568     VmaAllocator VMA_NULLABLE* VMA_NOT_NULL pAllocator);
1569 
1570 /// Destroys allocator object.
1571 VMA_CALL_PRE void VMA_CALL_POST vmaDestroyAllocator(
1572     VmaAllocator VMA_NULLABLE allocator);
1573 
1574 /** \brief Returns information about existing #VmaAllocator object - handle to Vulkan device etc.
1575 
It might be useful if you want to keep just the #VmaAllocator handle and fetch the other required handles to
`VkPhysicalDevice`, `VkDevice` etc. every time using this function.
1578 */
1579 VMA_CALL_PRE void VMA_CALL_POST vmaGetAllocatorInfo(
1580     VmaAllocator VMA_NOT_NULL allocator,
1581     VmaAllocatorInfo* VMA_NOT_NULL pAllocatorInfo);
1582 
1583 /**
`VkPhysicalDeviceProperties` are fetched from the physical device by the allocator.
You can access them here, without fetching them again on your own.
1586 */
1587 VMA_CALL_PRE void VMA_CALL_POST vmaGetPhysicalDeviceProperties(
1588     VmaAllocator VMA_NOT_NULL allocator,
1589     const VkPhysicalDeviceProperties* VMA_NULLABLE* VMA_NOT_NULL ppPhysicalDeviceProperties);
1590 
1591 /**
`VkPhysicalDeviceMemoryProperties` are fetched from the physical device by the allocator.
You can access them here, without fetching them again on your own.
1594 */
1595 VMA_CALL_PRE void VMA_CALL_POST vmaGetMemoryProperties(
1596     VmaAllocator VMA_NOT_NULL allocator,
1597     const VkPhysicalDeviceMemoryProperties* VMA_NULLABLE* VMA_NOT_NULL ppPhysicalDeviceMemoryProperties);
1598 
1599 /**
1600 \brief Given Memory Type Index, returns Property Flags of this memory type.
1601 
1602 This is just a convenience function. Same information can be obtained using
1603 vmaGetMemoryProperties().
1604 */
1605 VMA_CALL_PRE void VMA_CALL_POST vmaGetMemoryTypeProperties(
1606     VmaAllocator VMA_NOT_NULL allocator,
1607     uint32_t memoryTypeIndex,
1608     VkMemoryPropertyFlags* VMA_NOT_NULL pFlags);
1609 
1610 /** \brief Sets index of the current frame.
1611 */
1612 VMA_CALL_PRE void VMA_CALL_POST vmaSetCurrentFrameIndex(
1613     VmaAllocator VMA_NOT_NULL allocator,
1614     uint32_t frameIndex);
1615 
1616 /** @} */
1617 
1618 /**
1619 \addtogroup group_stats
1620 @{
1621 */
1622 
/** \brief Retrieves statistics from the current state of the Allocator.

This function is called "calculate" not "get" because it has to traverse all
internal data structures, so it may be quite slow. Use it for debugging purposes.
For faster but briefer statistics, suitable to be called every frame or on every allocation,
use vmaGetHeapBudgets().
1629 
Note that when using the allocator from multiple threads, the returned information may immediately
become outdated.
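
A minimal sketch, assuming an existing `allocator`:

\code
VmaTotalStatistics stats;
vmaCalculateStatistics(allocator, &stats);
printf("Allocated %llu B in %u VkDeviceMemory blocks.\n",
    (unsigned long long)stats.total.statistics.blockBytes,
    stats.total.statistics.blockCount);
\endcode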
1632 */
1633 VMA_CALL_PRE void VMA_CALL_POST vmaCalculateStatistics(
1634     VmaAllocator VMA_NOT_NULL allocator,
1635     VmaTotalStatistics* VMA_NOT_NULL pStats);
1636 
1637 /** \brief Retrieves information about current memory usage and budget for all memory heaps.
1638 
1639 \param allocator
\param[out] pBudgets Must point to an array with a number of elements at least equal to the number of memory heaps in the physical device used.
1641 
1642 This function is called "get" not "calculate" because it is very fast, suitable to be called
1643 every frame or every allocation. For more detailed statistics use vmaCalculateStatistics().
1644 
Note that when using the allocator from multiple threads, the returned information may immediately
become outdated.
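
A minimal sketch, assuming an existing `allocator`:

\code
const VkPhysicalDeviceMemoryProperties* memProps;
vmaGetMemoryProperties(allocator, &memProps);

VmaBudget budgets[VK_MAX_MEMORY_HEAPS];
vmaGetHeapBudgets(allocator, budgets);

for(uint32_t heapIndex = 0; heapIndex < memProps->memoryHeapCount; ++heapIndex)
{
    printf("Heap %u: usage %llu B, budget %llu B.\n", heapIndex,
        (unsigned long long)budgets[heapIndex].usage,
        (unsigned long long)budgets[heapIndex].budget);
}
\endcode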
1647 */
1648 VMA_CALL_PRE void VMA_CALL_POST vmaGetHeapBudgets(
1649     VmaAllocator VMA_NOT_NULL allocator,
1650     VmaBudget* VMA_NOT_NULL VMA_LEN_IF_NOT_NULL("VkPhysicalDeviceMemoryProperties::memoryHeapCount") pBudgets);
1651 
1652 /** @} */
1653 
1654 /**
1655 \addtogroup group_alloc
1656 @{
1657 */
1658 
1659 /**
1660 \brief Helps to find memoryTypeIndex, given memoryTypeBits and VmaAllocationCreateInfo.
1661 
1662 This algorithm tries to find a memory type that:
1663 
1664 - Is allowed by memoryTypeBits.
1665 - Contains all the flags from pAllocationCreateInfo->requiredFlags.
1666 - Matches intended usage.
1667 - Has as many flags from pAllocationCreateInfo->preferredFlags as possible.
1668 
\return Returns VK_ERROR_FEATURE_NOT_PRESENT if not found. Receiving such a result
from this function or any other allocating function probably means that your
device doesn't support any memory type with the requested features for the specific
type of resource you want to use it for. Please check the parameters of your
resource, like image layout (OPTIMAL versus LINEAR) or mip level count.
1674 */
1675 VMA_CALL_PRE VkResult VMA_CALL_POST vmaFindMemoryTypeIndex(
1676     VmaAllocator VMA_NOT_NULL allocator,
1677     uint32_t memoryTypeBits,
1678     const VmaAllocationCreateInfo* VMA_NOT_NULL pAllocationCreateInfo,
1679     uint32_t* VMA_NOT_NULL pMemoryTypeIndex);
1680 
1681 /**
1682 \brief Helps to find memoryTypeIndex, given VkBufferCreateInfo and VmaAllocationCreateInfo.
1683 
It can be useful e.g. to determine the value to be used as VmaPoolCreateInfo::memoryTypeIndex.
It internally creates a temporary, dummy buffer that never has memory bound to it.
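
A minimal sketch, assuming an existing `allocator`:

\code
VkBufferCreateInfo bufCreateInfo = { VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO };
bufCreateInfo.size = 65536;
bufCreateInfo.usage = VK_BUFFER_USAGE_UNIFORM_BUFFER_BIT;

VmaAllocationCreateInfo allocCreateInfo = {};
allocCreateInfo.usage = VMA_MEMORY_USAGE_AUTO;

uint32_t memTypeIndex;
VkResult res = vmaFindMemoryTypeIndexForBufferInfo(allocator,
    &bufCreateInfo, &allocCreateInfo, &memTypeIndex);
\endcode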
1686 */
1687 VMA_CALL_PRE VkResult VMA_CALL_POST vmaFindMemoryTypeIndexForBufferInfo(
1688     VmaAllocator VMA_NOT_NULL allocator,
1689     const VkBufferCreateInfo* VMA_NOT_NULL pBufferCreateInfo,
1690     const VmaAllocationCreateInfo* VMA_NOT_NULL pAllocationCreateInfo,
1691     uint32_t* VMA_NOT_NULL pMemoryTypeIndex);
1692 
1693 /**
1694 \brief Helps to find memoryTypeIndex, given VkImageCreateInfo and VmaAllocationCreateInfo.
1695 
It can be useful e.g. to determine the value to be used as VmaPoolCreateInfo::memoryTypeIndex.
It internally creates a temporary, dummy image that never has memory bound to it.
1698 */
1699 VMA_CALL_PRE VkResult VMA_CALL_POST vmaFindMemoryTypeIndexForImageInfo(
1700     VmaAllocator VMA_NOT_NULL allocator,
1701     const VkImageCreateInfo* VMA_NOT_NULL pImageCreateInfo,
1702     const VmaAllocationCreateInfo* VMA_NOT_NULL pAllocationCreateInfo,
1703     uint32_t* VMA_NOT_NULL pMemoryTypeIndex);
1704 
1705 /** \brief Allocates Vulkan device memory and creates #VmaPool object.
1706 
1707 \param allocator Allocator object.
1708 \param pCreateInfo Parameters of pool to create.
1709 \param[out] pPool Handle to created pool.
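
A minimal sketch, assuming `memTypeIndex` was found e.g. with vmaFindMemoryTypeIndexForBufferInfo():

\code
VmaPoolCreateInfo poolCreateInfo = {};
poolCreateInfo.memoryTypeIndex = memTypeIndex;
poolCreateInfo.blockSize = 128ull * 1024 * 1024; // Optional: fixed 128 MiB blocks.
poolCreateInfo.minBlockCount = 1; // Optional: keep at least 1 block preallocated.

VmaPool pool;
VkResult res = vmaCreatePool(allocator, &poolCreateInfo, &pool);
\endcode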
1710 */
1711 VMA_CALL_PRE VkResult VMA_CALL_POST vmaCreatePool(
1712     VmaAllocator VMA_NOT_NULL allocator,
1713     const VmaPoolCreateInfo* VMA_NOT_NULL pCreateInfo,
1714     VmaPool VMA_NULLABLE* VMA_NOT_NULL pPool);
1715 
1716 /** \brief Destroys #VmaPool object and frees Vulkan device memory.
1717 */
1718 VMA_CALL_PRE void VMA_CALL_POST vmaDestroyPool(
1719     VmaAllocator VMA_NOT_NULL allocator,
1720     VmaPool VMA_NULLABLE pool);
1721 
1722 /** @} */
1723 
1724 /**
1725 \addtogroup group_stats
1726 @{
1727 */
1728 
1729 /** \brief Retrieves statistics of existing #VmaPool object.
1730 
1731 \param allocator Allocator object.
1732 \param pool Pool object.
1733 \param[out] pPoolStats Statistics of specified pool.
1734 */
1735 VMA_CALL_PRE void VMA_CALL_POST vmaGetPoolStatistics(
1736     VmaAllocator VMA_NOT_NULL allocator,
1737     VmaPool VMA_NOT_NULL pool,
1738     VmaStatistics* VMA_NOT_NULL pPoolStats);
1739 
1740 /** \brief Retrieves detailed statistics of existing #VmaPool object.
1741 
1742 \param allocator Allocator object.
1743 \param pool Pool object.
1744 \param[out] pPoolStats Statistics of specified pool.
1745 */
1746 VMA_CALL_PRE void VMA_CALL_POST vmaCalculatePoolStatistics(
1747     VmaAllocator VMA_NOT_NULL allocator,
1748     VmaPool VMA_NOT_NULL pool,
1749     VmaDetailedStatistics* VMA_NOT_NULL pPoolStats);
1750 
1751 /** @} */
1752 
1753 /**
1754 \addtogroup group_alloc
1755 @{
1756 */
1757 
1758 /** \brief Checks magic number in margins around all allocations in given memory pool in search for corruptions.
1759 
Corruption detection is enabled only when the `VMA_DEBUG_DETECT_CORRUPTION` macro is defined to nonzero,
`VMA_DEBUG_MARGIN` is defined to nonzero, and the pool is created in a memory type that is
`HOST_VISIBLE` and `HOST_COHERENT`. For more information, see [Corruption detection](@ref debugging_memory_usage_corruption_detection).
1763 
1764 Possible return values:
1765 
1766 - `VK_ERROR_FEATURE_NOT_PRESENT` - corruption detection is not enabled for specified pool.
1767 - `VK_SUCCESS` - corruption detection has been performed and succeeded.
1768 - `VK_ERROR_UNKNOWN` - corruption detection has been performed and found memory corruptions around one of the allocations.
1769   `VMA_ASSERT` is also fired in that case.
1770 - Other value: Error returned by Vulkan, e.g. memory mapping failure.
1771 */
1772 VMA_CALL_PRE VkResult VMA_CALL_POST vmaCheckPoolCorruption(
1773     VmaAllocator VMA_NOT_NULL allocator,
1774     VmaPool VMA_NOT_NULL pool);
1775 
1776 /** \brief Retrieves name of a custom pool.
1777 
After the call, `ppName` is either null or points to an internally-owned, null-terminated string
containing the name of the pool that was previously set. The pointer becomes invalid when the pool is
destroyed or its name is changed using vmaSetPoolName().
1781 */
1782 VMA_CALL_PRE void VMA_CALL_POST vmaGetPoolName(
1783     VmaAllocator VMA_NOT_NULL allocator,
1784     VmaPool VMA_NOT_NULL pool,
1785     const char* VMA_NULLABLE* VMA_NOT_NULL ppName);
1786 
1787 /** \brief Sets name of a custom pool.
1788 
`pName` can be either null or a pointer to a null-terminated string with a new name for the pool.
The function makes an internal copy of the string, so it can be changed or freed immediately after this call.
1791 */
1792 VMA_CALL_PRE void VMA_CALL_POST vmaSetPoolName(
1793     VmaAllocator VMA_NOT_NULL allocator,
1794     VmaPool VMA_NOT_NULL pool,
1795     const char* VMA_NULLABLE pName);
1796 
1797 /** \brief General purpose memory allocation.
1798 
1799 \param allocator
1800 \param pVkMemoryRequirements
1801 \param pCreateInfo
1802 \param[out] pAllocation Handle to allocated memory.
1803 \param[out] pAllocationInfo Optional. Information about allocated memory. It can be later fetched using function vmaGetAllocationInfo().
1804 
1805 You should free the memory using vmaFreeMemory() or vmaFreeMemoryPages().
1806 
1807 It is recommended to use vmaAllocateMemoryForBuffer(), vmaAllocateMemoryForImage(),
1808 vmaCreateBuffer(), vmaCreateImage() instead whenever possible.
1809 */
1810 VMA_CALL_PRE VkResult VMA_CALL_POST vmaAllocateMemory(
1811     VmaAllocator VMA_NOT_NULL allocator,
1812     const VkMemoryRequirements* VMA_NOT_NULL pVkMemoryRequirements,
1813     const VmaAllocationCreateInfo* VMA_NOT_NULL pCreateInfo,
1814     VmaAllocation VMA_NULLABLE* VMA_NOT_NULL pAllocation,
1815     VmaAllocationInfo* VMA_NULLABLE pAllocationInfo);
1816 
1817 /** \brief General purpose memory allocation for multiple allocation objects at once.
1818 
1819 \param allocator Allocator object.
1820 \param pVkMemoryRequirements Memory requirements for each allocation.
1821 \param pCreateInfo Creation parameters for each allocation.
1822 \param allocationCount Number of allocations to make.
1823 \param[out] pAllocations Pointer to array that will be filled with handles to created allocations.
1824 \param[out] pAllocationInfo Optional. Pointer to array that will be filled with parameters of created allocations.
1825 
1826 You should free the memory using vmaFreeMemory() or vmaFreeMemoryPages().
1827 
1828 Word "pages" is just a suggestion to use this function to allocate pieces of memory needed for sparse binding.
1829 It is just a general purpose allocation function able to make multiple allocations at once.
1830 It may be internally optimized to be more efficient than calling vmaAllocateMemory() `allocationCount` times.
1831 
All allocations are made using the same parameters. All of them are created out of the same memory pool and type.
If any allocation fails, all allocations already made within this function call are also freed, so that when the
returned result is not `VK_SUCCESS`, the `pAllocations` array is always entirely filled with `VK_NULL_HANDLE`.
1835 */
1836 VMA_CALL_PRE VkResult VMA_CALL_POST vmaAllocateMemoryPages(
1837     VmaAllocator VMA_NOT_NULL allocator,
1838     const VkMemoryRequirements* VMA_NOT_NULL VMA_LEN_IF_NOT_NULL(allocationCount) pVkMemoryRequirements,
1839     const VmaAllocationCreateInfo* VMA_NOT_NULL VMA_LEN_IF_NOT_NULL(allocationCount) pCreateInfo,
1840     size_t allocationCount,
1841     VmaAllocation VMA_NULLABLE* VMA_NOT_NULL VMA_LEN_IF_NOT_NULL(allocationCount) pAllocations,
1842     VmaAllocationInfo* VMA_NULLABLE VMA_LEN_IF_NOT_NULL(allocationCount) pAllocationInfo);
1843 
1844 /** \brief Allocates memory suitable for given `VkBuffer`.
1845 
1846 \param allocator
1847 \param buffer
1848 \param pCreateInfo
1849 \param[out] pAllocation Handle to allocated memory.
1850 \param[out] pAllocationInfo Optional. Information about allocated memory. It can be later fetched using function vmaGetAllocationInfo().
1851 
1852 It only creates #VmaAllocation. To bind the memory to the buffer, use vmaBindBufferMemory().
1853 
1854 This is a special-purpose function. In most cases you should use vmaCreateBuffer().
1855 
1856 You must free the allocation using vmaFreeMemory() when no longer needed.
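
A minimal sketch, assuming `buf` is a `VkBuffer` created earlier with `vkCreateBuffer()`:

\code
VmaAllocationCreateInfo allocCreateInfo = {};
allocCreateInfo.usage = VMA_MEMORY_USAGE_AUTO;

VmaAllocation alloc;
VkResult res = vmaAllocateMemoryForBuffer(allocator, buf, &allocCreateInfo, &alloc, NULL);
if(res == VK_SUCCESS)
    res = vmaBindBufferMemory(allocator, alloc, buf);
\endcode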
1857 */
1858 VMA_CALL_PRE VkResult VMA_CALL_POST vmaAllocateMemoryForBuffer(
1859     VmaAllocator VMA_NOT_NULL allocator,
1860     VkBuffer VMA_NOT_NULL_NON_DISPATCHABLE buffer,
1861     const VmaAllocationCreateInfo* VMA_NOT_NULL pCreateInfo,
1862     VmaAllocation VMA_NULLABLE* VMA_NOT_NULL pAllocation,
1863     VmaAllocationInfo* VMA_NULLABLE pAllocationInfo);
1864 
1865 /** \brief Allocates memory suitable for given `VkImage`.
1866 
1867 \param allocator
1868 \param image
1869 \param pCreateInfo
1870 \param[out] pAllocation Handle to allocated memory.
1871 \param[out] pAllocationInfo Optional. Information about allocated memory. It can be later fetched using function vmaGetAllocationInfo().
1872 
It only creates #VmaAllocation. To bind the memory to the image, use vmaBindImageMemory().
1874 
1875 This is a special-purpose function. In most cases you should use vmaCreateImage().
1876 
1877 You must free the allocation using vmaFreeMemory() when no longer needed.
1878 */
1879 VMA_CALL_PRE VkResult VMA_CALL_POST vmaAllocateMemoryForImage(
1880     VmaAllocator VMA_NOT_NULL allocator,
1881     VkImage VMA_NOT_NULL_NON_DISPATCHABLE image,
1882     const VmaAllocationCreateInfo* VMA_NOT_NULL pCreateInfo,
1883     VmaAllocation VMA_NULLABLE* VMA_NOT_NULL pAllocation,
1884     VmaAllocationInfo* VMA_NULLABLE pAllocationInfo);
1885 
1886 /** \brief Frees memory previously allocated using vmaAllocateMemory(), vmaAllocateMemoryForBuffer(), or vmaAllocateMemoryForImage().
1887 
Passing `VK_NULL_HANDLE` as `allocation` is valid. Such a function call is just skipped.
1889 */
1890 VMA_CALL_PRE void VMA_CALL_POST vmaFreeMemory(
1891     VmaAllocator VMA_NOT_NULL allocator,
1892     const VmaAllocation VMA_NULLABLE allocation);
1893 
1894 /** \brief Frees memory and destroys multiple allocations.
1895 
1896 Word "pages" is just a suggestion to use this function to free pieces of memory used for sparse binding.
1897 It is just a general purpose function to free memory and destroy allocations made using e.g. vmaAllocateMemory(),
1898 vmaAllocateMemoryPages() and other functions.
1899 It may be internally optimized to be more efficient than calling vmaFreeMemory() `allocationCount` times.
1900 
1901 Allocations in `pAllocations` array can come from any memory pools and types.
1902 Passing `VK_NULL_HANDLE` as elements of `pAllocations` array is valid. Such entries are just skipped.
1903 */
1904 VMA_CALL_PRE void VMA_CALL_POST vmaFreeMemoryPages(
1905     VmaAllocator VMA_NOT_NULL allocator,
1906     size_t allocationCount,
1907     const VmaAllocation VMA_NULLABLE* VMA_NOT_NULL VMA_LEN_IF_NOT_NULL(allocationCount) pAllocations);
1908 
1909 /** \brief Returns current information about specified allocation.
1910 
Current parameters of the given allocation are returned in `pAllocationInfo`.

Although this function doesn't lock any mutex and should be quite efficient,
you should still avoid calling it too often.
You can retrieve the same VmaAllocationInfo structure while creating your resource, from functions
vmaCreateBuffer() and vmaCreateImage(). You can remember it if you are sure parameters don't change
(e.g. due to defragmentation).
1918 */
1919 VMA_CALL_PRE void VMA_CALL_POST vmaGetAllocationInfo(
1920     VmaAllocator VMA_NOT_NULL allocator,
1921     VmaAllocation VMA_NOT_NULL allocation,
1922     VmaAllocationInfo* VMA_NOT_NULL pAllocationInfo);
1923 
1924 /** \brief Sets pUserData in given allocation to new value.
1925 
The value of the pointer `pUserData` is copied to the allocation's `pUserData`.
It is opaque, so you can use it however you want - e.g.
as a pointer, an ordinal number, or some handle to your own data.
1929 */
1930 VMA_CALL_PRE void VMA_CALL_POST vmaSetAllocationUserData(
1931     VmaAllocator VMA_NOT_NULL allocator,
1932     VmaAllocation VMA_NOT_NULL allocation,
1933     void* VMA_NULLABLE pUserData);
1934 
1935 /** \brief Sets pName in given allocation to new value.
1936 
`pName` must be either null or a pointer to a null-terminated string. The function
makes a local copy of the string and sets it as the allocation's `pName`. The string
passed as pName doesn't need to be valid for the whole lifetime of the allocation -
you can free it after this call. The string previously pointed to by the allocation's
`pName` is freed from memory.
1942 */
1943 VMA_CALL_PRE void VMA_CALL_POST vmaSetAllocationName(
1944     VmaAllocator VMA_NOT_NULL allocator,
1945     VmaAllocation VMA_NOT_NULL allocation,
1946     const char* VMA_NULLABLE pName);
1947 
1948 /**
1949 \brief Given an allocation, returns Property Flags of its memory type.
1950 
1951 This is just a convenience function. Same information can be obtained using
1952 vmaGetAllocationInfo() + vmaGetMemoryProperties().
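
A minimal sketch, assuming an existing allocation `alloc`:

\code
VkMemoryPropertyFlags memPropFlags;
vmaGetAllocationMemoryProperties(allocator, alloc, &memPropFlags);
if((memPropFlags & VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT) != 0)
{
    // The allocation can be mapped with vmaMapMemory().
}
\endcode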
1953 */
1954 VMA_CALL_PRE void VMA_CALL_POST vmaGetAllocationMemoryProperties(
1955     VmaAllocator VMA_NOT_NULL allocator,
1956     VmaAllocation VMA_NOT_NULL allocation,
1957     VkMemoryPropertyFlags* VMA_NOT_NULL pFlags);
1958 
1959 /** \brief Maps memory represented by given allocation and returns pointer to it.
1960 
1961 Maps memory represented by given allocation to make it accessible to CPU code.
1962 When succeeded, `*ppData` contains pointer to first byte of this memory.
1963 
1964 \warning
If the allocation is part of a bigger `VkDeviceMemory` block, the returned pointer is
already correctly offset to the beginning of the region assigned to this particular allocation.
Unlike the result of `vkMapMemory`, it points to the allocation, not to the beginning of the whole block.
You should not add VmaAllocationInfo::offset to it!
1969 
Mapping is internally reference-counted and synchronized, so even though the raw Vulkan
function `vkMapMemory()` cannot be used to map the same block of `VkDeviceMemory`
multiple times simultaneously, it is safe to call this function on allocations
assigned to the same memory block. The actual Vulkan memory will be mapped on the first
mapping and unmapped on the last unmapping.
1975 
1976 If the function succeeded, you must call vmaUnmapMemory() to unmap the
1977 allocation when mapping is no longer needed or before freeing the allocation, at
1978 the latest.
1979 
It is also safe to call this function multiple times on the same allocation. You
must call vmaUnmapMemory() the same number of times as you called vmaMapMemory().
1982 
It is also safe to call this function on an allocation created with the
#VMA_ALLOCATION_CREATE_MAPPED_BIT flag. Its memory stays mapped all the time.
You must still call vmaUnmapMemory() the same number of times as you called
vmaMapMemory(). You must not call vmaUnmapMemory() an additional time to free the
"0-th" mapping made automatically due to the #VMA_ALLOCATION_CREATE_MAPPED_BIT flag.
1988 
This function fails when used on an allocation made in a memory type that is not
`HOST_VISIBLE`.

This function doesn't automatically flush or invalidate caches.
If the allocation is made from a memory type that is not `HOST_COHERENT`,
you also need to use vmaInvalidateAllocation() / vmaFlushAllocation(), as required by the Vulkan specification.
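
A minimal sketch of writing to a mapped allocation (`alloc`, `myData`, and `myDataSize` are hypothetical):

\code
void* mappedData;
VkResult res = vmaMapMemory(allocator, alloc, &mappedData);
if(res == VK_SUCCESS)
{
    memcpy(mappedData, myData, myDataSize);
    vmaFlushAllocation(allocator, alloc, 0, VK_WHOLE_SIZE); // Needed if the memory type is not HOST_COHERENT.
    vmaUnmapMemory(allocator, alloc);
}
\endcode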
1995 */
1996 VMA_CALL_PRE VkResult VMA_CALL_POST vmaMapMemory(
1997     VmaAllocator VMA_NOT_NULL allocator,
1998     VmaAllocation VMA_NOT_NULL allocation,
1999     void* VMA_NULLABLE* VMA_NOT_NULL ppData);
2000 
2001 /** \brief Unmaps memory represented by given allocation, mapped previously using vmaMapMemory().
2002 
2003 For details, see description of vmaMapMemory().
2004 
This function doesn't automatically flush or invalidate caches.
If the allocation is made from a memory type that is not `HOST_COHERENT`,
you also need to use vmaInvalidateAllocation() / vmaFlushAllocation(), as required by the Vulkan specification.
2008 */
2009 VMA_CALL_PRE void VMA_CALL_POST vmaUnmapMemory(
2010     VmaAllocator VMA_NOT_NULL allocator,
2011     VmaAllocation VMA_NOT_NULL allocation);
2012 
2013 /** \brief Flushes memory of given allocation.
2014 
Calls `vkFlushMappedMemoryRanges()` for memory associated with the given range of the given allocation.
It needs to be called after writing to mapped memory of memory types that are not `HOST_COHERENT`.
The unmap operation doesn't do that automatically.

- `offset` must be relative to the beginning of the allocation.
- `size` can be `VK_WHOLE_SIZE`. It means all memory from `offset` to the end of the given allocation.
- `offset` and `size` don't have to be aligned.
  They are internally rounded down/up to a multiple of `nonCoherentAtomSize`.
- If `size` is 0, this call is ignored.
- If the memory type that the `allocation` belongs to is not `HOST_VISIBLE` or it is `HOST_COHERENT`,
  this call is ignored.

Warning! `offset` and `size` are relative to the contents of the given `allocation`.
If you mean the whole allocation, you can pass 0 and `VK_WHOLE_SIZE`, respectively.
Do not pass the allocation's offset as `offset`!
2030 
2031 This function returns the `VkResult` from `vkFlushMappedMemoryRanges` if it is
2032 called, otherwise `VK_SUCCESS`.
2033 */
2034 VMA_CALL_PRE VkResult VMA_CALL_POST vmaFlushAllocation(
2035     VmaAllocator VMA_NOT_NULL allocator,
2036     VmaAllocation VMA_NOT_NULL allocation,
2037     VkDeviceSize offset,
2038     VkDeviceSize size);
2039 
2040 /** \brief Invalidates memory of given allocation.
2041 
Calls `vkInvalidateMappedMemoryRanges()` for memory associated with the given range of the given allocation.
It needs to be called before reading from mapped memory of memory types that are not `HOST_COHERENT`.
The map operation doesn't do that automatically.

- `offset` must be relative to the beginning of the allocation.
- `size` can be `VK_WHOLE_SIZE`. It means all memory from `offset` to the end of the given allocation.
- `offset` and `size` don't have to be aligned.
  They are internally rounded down/up to a multiple of `nonCoherentAtomSize`.
- If `size` is 0, this call is ignored.
- If the memory type that the `allocation` belongs to is not `HOST_VISIBLE` or it is `HOST_COHERENT`,
  this call is ignored.

Warning! `offset` and `size` are relative to the contents of the given `allocation`.
If you mean the whole allocation, you can pass 0 and `VK_WHOLE_SIZE`, respectively.
Do not pass the allocation's offset as `offset`!
2057 
2058 This function returns the `VkResult` from `vkInvalidateMappedMemoryRanges` if
2059 it is called, otherwise `VK_SUCCESS`.
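
A minimal sketch of reading back data written by the GPU (`alloc`, `myResult`, and `myResultSize` are hypothetical):

\code
void* mappedData;
VkResult res = vmaMapMemory(allocator, alloc, &mappedData);
if(res == VK_SUCCESS)
{
    vmaInvalidateAllocation(allocator, alloc, 0, VK_WHOLE_SIZE); // Needed if the memory type is not HOST_COHERENT.
    memcpy(myResult, mappedData, myResultSize);
    vmaUnmapMemory(allocator, alloc);
}
\endcode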
2060 */
2061 VMA_CALL_PRE VkResult VMA_CALL_POST vmaInvalidateAllocation(
2062     VmaAllocator VMA_NOT_NULL allocator,
2063     VmaAllocation VMA_NOT_NULL allocation,
2064     VkDeviceSize offset,
2065     VkDeviceSize size);
2066 
2067 /** \brief Flushes memory of given set of allocations.
2068 
2069 Calls `vkFlushMappedMemoryRanges()` for memory associated with given ranges of given allocations.
2070 For more information, see documentation of vmaFlushAllocation().
2071 
2072 \param allocator
2073 \param allocationCount
2074 \param allocations
\param offsets If not null, it must point to an array of offsets of regions to flush, relative to the beginning of respective allocations. Null means all offsets are zero.
\param sizes If not null, it must point to an array of sizes of regions to flush in respective allocations. Null means `VK_WHOLE_SIZE` for all allocations.
2077 
2078 This function returns the `VkResult` from `vkFlushMappedMemoryRanges` if it is
2079 called, otherwise `VK_SUCCESS`.
2080 */
2081 VMA_CALL_PRE VkResult VMA_CALL_POST vmaFlushAllocations(
2082     VmaAllocator VMA_NOT_NULL allocator,
2083     uint32_t allocationCount,
2084     const VmaAllocation VMA_NOT_NULL* VMA_NULLABLE VMA_LEN_IF_NOT_NULL(allocationCount) allocations,
2085     const VkDeviceSize* VMA_NULLABLE VMA_LEN_IF_NOT_NULL(allocationCount) offsets,
2086     const VkDeviceSize* VMA_NULLABLE VMA_LEN_IF_NOT_NULL(allocationCount) sizes);
2087 
2088 /** \brief Invalidates memory of given set of allocations.
2089 
2090 Calls `vkInvalidateMappedMemoryRanges()` for memory associated with given ranges of given allocations.
2091 For more information, see documentation of vmaInvalidateAllocation().
2092 
2093 \param allocator
2094 \param allocationCount
2095 \param allocations
\param offsets If not null, it must point to an array of offsets of regions to invalidate, relative to the beginning of respective allocations. Null means all offsets are zero.
\param sizes If not null, it must point to an array of sizes of regions to invalidate in respective allocations. Null means `VK_WHOLE_SIZE` for all allocations.
2098 
2099 This function returns the `VkResult` from `vkInvalidateMappedMemoryRanges` if it is
2100 called, otherwise `VK_SUCCESS`.
2101 */
2102 VMA_CALL_PRE VkResult VMA_CALL_POST vmaInvalidateAllocations(
2103     VmaAllocator VMA_NOT_NULL allocator,
2104     uint32_t allocationCount,
2105     const VmaAllocation VMA_NOT_NULL* VMA_NULLABLE VMA_LEN_IF_NOT_NULL(allocationCount) allocations,
2106     const VkDeviceSize* VMA_NULLABLE VMA_LEN_IF_NOT_NULL(allocationCount) offsets,
2107     const VkDeviceSize* VMA_NULLABLE VMA_LEN_IF_NOT_NULL(allocationCount) sizes);
2108 
2109 /** \brief Checks magic number in margins around all allocations in given memory types (in both default and custom pools) in search for corruptions.
2110 
2111 \param allocator
2112 \param memoryTypeBits Bit mask, where each bit set means that a memory type with that index should be checked.
2113 
Corruption detection is enabled only when the `VMA_DEBUG_DETECT_CORRUPTION` macro is defined to nonzero,
`VMA_DEBUG_MARGIN` is defined to nonzero, and only for memory types that are
`HOST_VISIBLE` and `HOST_COHERENT`. For more information, see [Corruption detection](@ref debugging_memory_usage_corruption_detection).
2117 
2118 Possible return values:
2119 
2120 - `VK_ERROR_FEATURE_NOT_PRESENT` - corruption detection is not enabled for any of specified memory types.
2121 - `VK_SUCCESS` - corruption detection has been performed and succeeded.
2122 - `VK_ERROR_UNKNOWN` - corruption detection has been performed and found memory corruptions around one of the allocations.
2123   `VMA_ASSERT` is also fired in that case.
2124 - Other value: Error returned by Vulkan, e.g. memory mapping failure.
2125 */
2126 VMA_CALL_PRE VkResult VMA_CALL_POST vmaCheckCorruption(
2127     VmaAllocator VMA_NOT_NULL allocator,
2128     uint32_t memoryTypeBits);
2129 
2130 /** \brief Begins defragmentation process.
2131 
2132 \param allocator Allocator object.
2133 \param pInfo Structure filled with parameters of defragmentation.
2134 \param[out] pContext Context object that must be passed to vmaEndDefragmentation() to finish defragmentation.
2135 \returns
2136 - `VK_SUCCESS` if defragmentation can begin.
2137 - `VK_ERROR_FEATURE_NOT_PRESENT` if defragmentation is not supported.
2138 
2139 For more information about defragmentation, see documentation chapter:
2140 [Defragmentation](@ref defragmentation).
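
A minimal sketch of a full defragmentation run, assuming an existing `allocator`
(`myPool` is hypothetical; leave `pool` null to defragment the default pools):

\code
VmaDefragmentationInfo defragInfo = {};
defragInfo.pool = myPool;
defragInfo.flags = VMA_DEFRAGMENTATION_FLAG_ALGORITHM_FAST_BIT;

VmaDefragmentationContext defragCtx;
VkResult res = vmaBeginDefragmentation(allocator, &defragInfo, &defragCtx);
// Perform passes using vmaBeginDefragmentationPass() / vmaEndDefragmentationPass()...
vmaEndDefragmentation(allocator, defragCtx, NULL);
\endcode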
2141 */
2142 VMA_CALL_PRE VkResult VMA_CALL_POST vmaBeginDefragmentation(
2143     VmaAllocator VMA_NOT_NULL allocator,
2144     const VmaDefragmentationInfo* VMA_NOT_NULL pInfo,
2145     VmaDefragmentationContext VMA_NULLABLE* VMA_NOT_NULL pContext);
2146 
2147 /** \brief Ends defragmentation process.
2148 
2149 \param allocator Allocator object.
2150 \param context Context object that has been created by vmaBeginDefragmentation().
2151 \param[out] pStats Optional stats for the defragmentation. Can be null.
2152 
2153 Use this function to finish defragmentation started by vmaBeginDefragmentation().
2154 */
2155 VMA_CALL_PRE void VMA_CALL_POST vmaEndDefragmentation(
2156     VmaAllocator VMA_NOT_NULL allocator,
2157     VmaDefragmentationContext VMA_NOT_NULL context,
2158     VmaDefragmentationStats* VMA_NULLABLE pStats);
2159 
2160 /** \brief Starts single defragmentation pass.
2161 
2162 \param allocator Allocator object.
2163 \param context Context object that has been created by vmaBeginDefragmentation().
\param[out] pPassInfo Computed information for the current pass.
2165 \returns
- `VK_SUCCESS` if no more moves are possible. Then you can omit the call to vmaEndDefragmentationPass() and simply end the whole defragmentation.
- `VK_INCOMPLETE` if there are pending moves returned in `pPassInfo`. You need to perform them, call vmaEndDefragmentationPass(),
  and then preferably try another pass with vmaBeginDefragmentationPass().
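
A minimal sketch of the pass loop, assuming `defragCtx` was created by vmaBeginDefragmentation():

\code
for(;;)
{
    VmaDefragmentationPassMoveInfo passInfo = {};
    VkResult res = vmaBeginDefragmentationPass(allocator, defragCtx, &passInfo);
    if(res == VK_SUCCESS)
        break; // No more moves - the whole defragmentation can end.
    // Here: recreate resources at passInfo.pMoves[i].dstTmpAllocation,
    // copy their data, and destroy the old buffers/images.
    res = vmaEndDefragmentationPass(allocator, defragCtx, &passInfo);
    if(res == VK_SUCCESS)
        break;
}
\endcode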
2169 */
2170 VMA_CALL_PRE VkResult VMA_CALL_POST vmaBeginDefragmentationPass(
2171     VmaAllocator VMA_NOT_NULL allocator,
2172     VmaDefragmentationContext VMA_NOT_NULL context,
2173     VmaDefragmentationPassMoveInfo* VMA_NOT_NULL pPassInfo);
2174 
2175 /** \brief Ends single defragmentation pass.
2176 
2177 \param allocator Allocator object.
2178 \param context Context object that has been created by vmaBeginDefragmentation().
\param pPassInfo Computed information for the current pass, filled by vmaBeginDefragmentationPass() and possibly modified by you.

Returns `VK_SUCCESS` if no more moves are possible, or `VK_INCOMPLETE` if more defragmentation is possible.
2182 
2183 Ends incremental defragmentation pass and commits all defragmentation moves from `pPassInfo`.
2184 After this call:
2185 
- Allocations at `pPassInfo->pMoves[i].srcAllocation` that had `pPassInfo->pMoves[i].operation ==` #VMA_DEFRAGMENTATION_MOVE_OPERATION_COPY
  (which is the default) will be pointing to the new destination place.
- Allocations at `pPassInfo->pMoves[i].srcAllocation` that had `pPassInfo->pMoves[i].operation ==` #VMA_DEFRAGMENTATION_MOVE_OPERATION_DESTROY
  will be freed.
2190 
If no more moves are possible, you can end the whole defragmentation.
2192 */
2193 VMA_CALL_PRE VkResult VMA_CALL_POST vmaEndDefragmentationPass(
2194     VmaAllocator VMA_NOT_NULL allocator,
2195     VmaDefragmentationContext VMA_NOT_NULL context,
2196     VmaDefragmentationPassMoveInfo* VMA_NOT_NULL pPassInfo);
2197 
2198 /** \brief Binds buffer to allocation.
2199 
Binds the specified buffer to a region of memory represented by the specified allocation.
It gets the `VkDeviceMemory` handle and offset from the allocation.
If you want to create a buffer, allocate memory for it, and bind them together separately,
you should use this function for binding instead of the standard `vkBindBufferMemory()`,
2204 because it ensures proper synchronization so that when a `VkDeviceMemory` object is used by multiple
2205 allocations, calls to `vkBind*Memory()` or `vkMapMemory()` won't happen from multiple threads simultaneously
2206 (which is illegal in Vulkan).
2207 
2208 It is recommended to use function vmaCreateBuffer() instead of this one.
2209 */
2210 VMA_CALL_PRE VkResult VMA_CALL_POST vmaBindBufferMemory(
2211     VmaAllocator VMA_NOT_NULL allocator,
2212     VmaAllocation VMA_NOT_NULL allocation,
2213     VkBuffer VMA_NOT_NULL_NON_DISPATCHABLE buffer);
2214 
2215 /** \brief Binds buffer to allocation with additional parameters.
2216 
2217 \param allocator
2218 \param allocation
2219 \param allocationLocalOffset Additional offset to be added while binding, relative to the beginning of the `allocation`. Normally it should be 0.
2220 \param buffer
2221 \param pNext A chain of structures to be attached to `VkBindBufferMemoryInfoKHR` structure used internally. Normally it should be null.
2222 
2223 This function is similar to vmaBindBufferMemory(), but it provides additional parameters.
2224 
2225 If `pNext` is not null, #VmaAllocator object must have been created with #VMA_ALLOCATOR_CREATE_KHR_BIND_MEMORY2_BIT flag
2226 or with VmaAllocatorCreateInfo::vulkanApiVersion `>= VK_API_VERSION_1_1`. Otherwise the call fails.
2227 */
2228 VMA_CALL_PRE VkResult VMA_CALL_POST vmaBindBufferMemory2(
2229     VmaAllocator VMA_NOT_NULL allocator,
2230     VmaAllocation VMA_NOT_NULL allocation,
2231     VkDeviceSize allocationLocalOffset,
2232     VkBuffer VMA_NOT_NULL_NON_DISPATCHABLE buffer,
2233     const void* VMA_NULLABLE pNext);
2234 
2235 /** \brief Binds image to allocation.
2236 
Binds the specified image to a region of memory represented by the specified allocation.
It gets the `VkDeviceMemory` handle and offset from the allocation.
If you want to create an image, allocate memory for it, and bind them together separately,
you should use this function for binding instead of the standard `vkBindImageMemory()`,
2241 because it ensures proper synchronization so that when a `VkDeviceMemory` object is used by multiple
2242 allocations, calls to `vkBind*Memory()` or `vkMapMemory()` won't happen from multiple threads simultaneously
2243 (which is illegal in Vulkan).
2244 
2245 It is recommended to use function vmaCreateImage() instead of this one.
2246 */
2247 VMA_CALL_PRE VkResult VMA_CALL_POST vmaBindImageMemory(
2248     VmaAllocator VMA_NOT_NULL allocator,
2249     VmaAllocation VMA_NOT_NULL allocation,
2250     VkImage VMA_NOT_NULL_NON_DISPATCHABLE image);
2251 
2252 /** \brief Binds image to allocation with additional parameters.
2253 
2254 \param allocator
2255 \param allocation
2256 \param allocationLocalOffset Additional offset to be added while binding, relative to the beginning of the `allocation`. Normally it should be 0.
2257 \param image
2258 \param pNext A chain of structures to be attached to `VkBindImageMemoryInfoKHR` structure used internally. Normally it should be null.
2259 
2260 This function is similar to vmaBindImageMemory(), but it provides additional parameters.
2261 
2262 If `pNext` is not null, #VmaAllocator object must have been created with #VMA_ALLOCATOR_CREATE_KHR_BIND_MEMORY2_BIT flag
2263 or with VmaAllocatorCreateInfo::vulkanApiVersion `>= VK_API_VERSION_1_1`. Otherwise the call fails.
2264 */
2265 VMA_CALL_PRE VkResult VMA_CALL_POST vmaBindImageMemory2(
2266     VmaAllocator VMA_NOT_NULL allocator,
2267     VmaAllocation VMA_NOT_NULL allocation,
2268     VkDeviceSize allocationLocalOffset,
2269     VkImage VMA_NOT_NULL_NON_DISPATCHABLE image,
2270     const void* VMA_NULLABLE pNext);
2271 
2272 /** \brief Creates a new `VkBuffer`, allocates and binds memory for it.
2273 
2274 \param allocator
2275 \param pBufferCreateInfo
2276 \param pAllocationCreateInfo
2277 \param[out] pBuffer Buffer that was created.
2278 \param[out] pAllocation Allocation that was created.
2279 \param[out] pAllocationInfo Optional. Information about allocated memory. It can be later fetched using function vmaGetAllocationInfo().
2280 
2281 This function automatically:
2282 
2283 -# Creates buffer.
2284 -# Allocates appropriate memory for it.
2285 -# Binds the buffer with the memory.
2286 
2287 If any of these operations fail, buffer and allocation are not created,
2288 returned value is negative error code, `*pBuffer` and `*pAllocation` are null.
2289 
2290 If the function succeeded, you must destroy both buffer and allocation when you
2291 no longer need them using either convenience function vmaDestroyBuffer() or
2292 separately, using `vkDestroyBuffer()` and vmaFreeMemory().
2293 
2294 If #VMA_ALLOCATOR_CREATE_KHR_DEDICATED_ALLOCATION_BIT flag was used,
2295 VK_KHR_dedicated_allocation extension is used internally to query driver whether
2296 it requires or prefers the new buffer to have dedicated allocation. If yes,
2297 and if dedicated allocation is possible
2298 (#VMA_ALLOCATION_CREATE_NEVER_ALLOCATE_BIT is not used), it creates dedicated
2299 allocation for this buffer, just like when using
2300 #VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT.
2301 
2302 \note This function creates a new `VkBuffer`. Sub-allocation of parts of one large buffer,
2303 although recommended as a good practice, is out of scope of this library and could be implemented
2304 by the user as a higher-level logic on top of VMA.
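
Example (a minimal sketch; the size and usage flags are placeholders):

\code
VkBufferCreateInfo bufCreateInfo = { VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO };
bufCreateInfo.size = 65536;
bufCreateInfo.usage = VK_BUFFER_USAGE_VERTEX_BUFFER_BIT | VK_BUFFER_USAGE_TRANSFER_DST_BIT;

VmaAllocationCreateInfo allocCreateInfo = {};
allocCreateInfo.usage = VMA_MEMORY_USAGE_AUTO;

VkBuffer buf;
VmaAllocation alloc;
VkResult res = vmaCreateBuffer(allocator, &bufCreateInfo, &allocCreateInfo, &buf, &alloc, nullptr);
\endcode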
2305 */
2306 VMA_CALL_PRE VkResult VMA_CALL_POST vmaCreateBuffer(
2307     VmaAllocator VMA_NOT_NULL allocator,
2308     const VkBufferCreateInfo* VMA_NOT_NULL pBufferCreateInfo,
2309     const VmaAllocationCreateInfo* VMA_NOT_NULL pAllocationCreateInfo,
2310     VkBuffer VMA_NULLABLE_NON_DISPATCHABLE* VMA_NOT_NULL pBuffer,
2311     VmaAllocation VMA_NULLABLE* VMA_NOT_NULL pAllocation,
2312     VmaAllocationInfo* VMA_NULLABLE pAllocationInfo);
2313 
2314 /** \brief Creates a buffer with additional minimum alignment.
2315 
2316 Similar to vmaCreateBuffer(), but provides the additional parameter `minAlignment`, which lets you specify
2317 a custom minimum alignment to be used when placing the buffer inside a larger memory block. This may be
2318 needed e.g. for interop with OpenGL.
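
A sketch, assuming `bufCreateInfo` and `allocCreateInfo` are filled the same way as for
vmaCreateBuffer() (the value 256 is an arbitrary example alignment):

\code
VkBuffer buf;
VmaAllocation alloc;
VkResult res = vmaCreateBufferWithAlignment(allocator, &bufCreateInfo, &allocCreateInfo,
    256, // minAlignment
    &buf, &alloc, nullptr);
\endcode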
2319 */
2320 VMA_CALL_PRE VkResult VMA_CALL_POST vmaCreateBufferWithAlignment(
2321     VmaAllocator VMA_NOT_NULL allocator,
2322     const VkBufferCreateInfo* VMA_NOT_NULL pBufferCreateInfo,
2323     const VmaAllocationCreateInfo* VMA_NOT_NULL pAllocationCreateInfo,
2324     VkDeviceSize minAlignment,
2325     VkBuffer VMA_NULLABLE_NON_DISPATCHABLE* VMA_NOT_NULL pBuffer,
2326     VmaAllocation VMA_NULLABLE* VMA_NOT_NULL pAllocation,
2327     VmaAllocationInfo* VMA_NULLABLE pAllocationInfo);
2328 
2329 /** \brief Creates a new `VkBuffer`, binds already created memory for it.
2330 
2331 \param allocator
2332 \param allocation Allocation that provides memory to be used for binding new buffer to it.
2333 \param pBufferCreateInfo
2334 \param[out] pBuffer Buffer that was created.
2335 
2336 This function automatically:
2337 
2338 -# Creates buffer.
2339 -# Binds the buffer with the supplied memory.
2340 
2341 If any of these operations fail, buffer is not created,
2342 returned value is negative error code and `*pBuffer` is null.
2343 
2344 If the function succeeded, you must destroy the buffer when you
2345 no longer need it using `vkDestroyBuffer()`. If you want to also destroy the corresponding
2346 allocation you can use convenience function vmaDestroyBuffer().
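
A sketch of typical use - creating a second buffer that aliases memory already owned by an
existing allocation (`aliasBufCreateInfo` is a hypothetical, separately filled `VkBufferCreateInfo`,
and `device` is the `VkDevice` used by the allocator):

\code
VkBuffer aliasBuf;
VkResult res = vmaCreateAliasingBuffer(allocator, alloc, &aliasBufCreateInfo, &aliasBuf);
// ... use aliasBuf, making sure aliasing resources are not used at the same time ...
vkDestroyBuffer(device, aliasBuf, nullptr); // the memory remains owned by alloc
\endcode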
2347 */
2348 VMA_CALL_PRE VkResult VMA_CALL_POST vmaCreateAliasingBuffer(
2349     VmaAllocator VMA_NOT_NULL allocator,
2350     VmaAllocation VMA_NOT_NULL allocation,
2351     const VkBufferCreateInfo* VMA_NOT_NULL pBufferCreateInfo,
2352     VkBuffer VMA_NULLABLE_NON_DISPATCHABLE* VMA_NOT_NULL pBuffer);
2353 
2354 /** \brief Destroys Vulkan buffer and frees allocated memory.
2355 
2356 This is just a convenience function equivalent to:
2357 
2358 \code
2359 vkDestroyBuffer(device, buffer, allocationCallbacks);
2360 vmaFreeMemory(allocator, allocation);
2361 \endcode
2362 
2363 It is safe to pass null as buffer and/or allocation.
2364 */
2365 VMA_CALL_PRE void VMA_CALL_POST vmaDestroyBuffer(
2366     VmaAllocator VMA_NOT_NULL allocator,
2367     VkBuffer VMA_NULLABLE_NON_DISPATCHABLE buffer,
2368     VmaAllocation VMA_NULLABLE allocation);
2369 
2370 /// Function similar to vmaCreateBuffer().
2371 VMA_CALL_PRE VkResult VMA_CALL_POST vmaCreateImage(
2372     VmaAllocator VMA_NOT_NULL allocator,
2373     const VkImageCreateInfo* VMA_NOT_NULL pImageCreateInfo,
2374     const VmaAllocationCreateInfo* VMA_NOT_NULL pAllocationCreateInfo,
2375     VkImage VMA_NULLABLE_NON_DISPATCHABLE* VMA_NOT_NULL pImage,
2376     VmaAllocation VMA_NULLABLE* VMA_NOT_NULL pAllocation,
2377     VmaAllocationInfo* VMA_NULLABLE pAllocationInfo);
2378 
2379 /// Function similar to vmaCreateAliasingBuffer().
2380 VMA_CALL_PRE VkResult VMA_CALL_POST vmaCreateAliasingImage(
2381     VmaAllocator VMA_NOT_NULL allocator,
2382     VmaAllocation VMA_NOT_NULL allocation,
2383     const VkImageCreateInfo* VMA_NOT_NULL pImageCreateInfo,
2384     VkImage VMA_NULLABLE_NON_DISPATCHABLE* VMA_NOT_NULL pImage);
2385 
2386 /** \brief Destroys Vulkan image and frees allocated memory.
2387 
2388 This is just a convenience function equivalent to:
2389 
2390 \code
2391 vkDestroyImage(device, image, allocationCallbacks);
2392 vmaFreeMemory(allocator, allocation);
2393 \endcode
2394 
2395 It is safe to pass null as image and/or allocation.
2396 */
2397 VMA_CALL_PRE void VMA_CALL_POST vmaDestroyImage(
2398     VmaAllocator VMA_NOT_NULL allocator,
2399     VkImage VMA_NULLABLE_NON_DISPATCHABLE image,
2400     VmaAllocation VMA_NULLABLE allocation);
2401 
2402 /** @} */
2403 
2404 /**
2405 \addtogroup group_virtual
2406 @{
2407 */
2408 
2409 /** \brief Creates new #VmaVirtualBlock object.
2410 
2411 \param pCreateInfo Parameters for creation.
2412 \param[out] pVirtualBlock Returned virtual block object or `VMA_NULL` if creation failed.
2413 */
2414 VMA_CALL_PRE VkResult VMA_CALL_POST vmaCreateVirtualBlock(
2415     const VmaVirtualBlockCreateInfo* VMA_NOT_NULL pCreateInfo,
2416     VmaVirtualBlock VMA_NULLABLE* VMA_NOT_NULL pVirtualBlock);
2417 
2418 /** \brief Destroys #VmaVirtualBlock object.
2419 
2420 Please note that you should consciously handle virtual allocations that could remain unfreed in the block.
2421 You should either free them individually using vmaVirtualFree() or call vmaClearVirtualBlock()
2422 if you are sure this is what you want. If you do neither, an assert is called.
2423 
2424 If you keep pointers to some additional metadata associated with your virtual allocations in their `pUserData`,
2425 don't forget to free them.
2426 */
2427 VMA_CALL_PRE void VMA_CALL_POST vmaDestroyVirtualBlock(
2428     VmaVirtualBlock VMA_NULLABLE virtualBlock);
2429 
2430 /** \brief Returns true if the #VmaVirtualBlock is empty - contains 0 virtual allocations and has all its space available for new allocations.
2431 */
2432 VMA_CALL_PRE VkBool32 VMA_CALL_POST vmaIsVirtualBlockEmpty(
2433     VmaVirtualBlock VMA_NOT_NULL virtualBlock);
2434 
2435 /** \brief Returns information about a specific virtual allocation within a virtual block, like its size and `pUserData` pointer.
2436 */
2437 VMA_CALL_PRE void VMA_CALL_POST vmaGetVirtualAllocationInfo(
2438     VmaVirtualBlock VMA_NOT_NULL virtualBlock,
2439     VmaVirtualAllocation VMA_NOT_NULL_NON_DISPATCHABLE allocation, VmaVirtualAllocationInfo* VMA_NOT_NULL pVirtualAllocInfo);
2440 
2441 /** \brief Allocates new virtual allocation inside given #VmaVirtualBlock.
2442 
2443 If the allocation fails because there is not enough free space available, `VK_ERROR_OUT_OF_DEVICE_MEMORY` is returned
2444 (even though the function never allocates actual GPU memory).
2445 `pAllocation` is then set to `VK_NULL_HANDLE` and `pOffset`, if not null, is set to `UINT64_MAX`.
2446 
2447 \param virtualBlock Virtual block
2448 \param pCreateInfo Parameters for the allocation
2449 \param[out] pAllocation Returned handle of the new allocation
2450 \param[out] pOffset Returned offset of the new allocation. Optional, can be null.
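
A sketch of typical use, from creating a virtual block to making an allocation inside it
(the sizes are arbitrary examples):

\code
VmaVirtualBlockCreateInfo blockCreateInfo = {};
blockCreateInfo.size = 1048576; // 1 MB

VmaVirtualBlock block;
VkResult res = vmaCreateVirtualBlock(&blockCreateInfo, &block);

VmaVirtualAllocationCreateInfo allocCreateInfo = {};
allocCreateInfo.size = 4096;

VmaVirtualAllocation alloc;
VkDeviceSize offset;
res = vmaVirtualAllocate(block, &allocCreateInfo, &alloc, &offset);
\endcode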
2451 */
2452 VMA_CALL_PRE VkResult VMA_CALL_POST vmaVirtualAllocate(
2453     VmaVirtualBlock VMA_NOT_NULL virtualBlock,
2454     const VmaVirtualAllocationCreateInfo* VMA_NOT_NULL pCreateInfo,
2455     VmaVirtualAllocation VMA_NULLABLE_NON_DISPATCHABLE* VMA_NOT_NULL pAllocation,
2456     VkDeviceSize* VMA_NULLABLE pOffset);
2457 
2458 /** \brief Frees virtual allocation inside given #VmaVirtualBlock.
2459 
2460 It is correct to call this function with `allocation == VK_NULL_HANDLE` - it does nothing.
2461 */
2462 VMA_CALL_PRE void VMA_CALL_POST vmaVirtualFree(
2463     VmaVirtualBlock VMA_NOT_NULL virtualBlock,
2464     VmaVirtualAllocation VMA_NULLABLE_NON_DISPATCHABLE allocation);
2465 
2466 /** \brief Frees all virtual allocations inside given #VmaVirtualBlock.
2467 
2468 You must either call this function or free each virtual allocation individually with vmaVirtualFree()
2469 before destroying a virtual block. Otherwise, an assert is called.
2470 
2471 If you keep pointer to some additional metadata associated with your virtual allocation in its `pUserData`,
2472 don't forget to free it as well.
2473 */
2474 VMA_CALL_PRE void VMA_CALL_POST vmaClearVirtualBlock(
2475     VmaVirtualBlock VMA_NOT_NULL virtualBlock);
2476 
2477 /** \brief Changes custom pointer associated with given virtual allocation.
2478 */
2479 VMA_CALL_PRE void VMA_CALL_POST vmaSetVirtualAllocationUserData(
2480     VmaVirtualBlock VMA_NOT_NULL virtualBlock,
2481     VmaVirtualAllocation VMA_NOT_NULL_NON_DISPATCHABLE allocation,
2482     void* VMA_NULLABLE pUserData);
2483 
2484 /** \brief Calculates and returns statistics about virtual allocations and memory usage in given #VmaVirtualBlock.
2485 
2486 This function is fast to call. For more detailed statistics, see vmaCalculateVirtualBlockStatistics().
2487 */
2488 VMA_CALL_PRE void VMA_CALL_POST vmaGetVirtualBlockStatistics(
2489     VmaVirtualBlock VMA_NOT_NULL virtualBlock,
2490     VmaStatistics* VMA_NOT_NULL pStats);
2491 
2492 /** \brief Calculates and returns detailed statistics about virtual allocations and memory usage in given #VmaVirtualBlock.
2493 
2494 This function is slow to call. Use for debugging purposes.
2495 For less detailed statistics, see vmaGetVirtualBlockStatistics().
2496 */
2497 VMA_CALL_PRE void VMA_CALL_POST vmaCalculateVirtualBlockStatistics(
2498     VmaVirtualBlock VMA_NOT_NULL virtualBlock,
2499     VmaDetailedStatistics* VMA_NOT_NULL pStats);
2500 
2501 /** @} */
2502 
2503 #if VMA_STATS_STRING_ENABLED
2504 /**
2505 \addtogroup group_stats
2506 @{
2507 */
2508 
2509 /** \brief Builds and returns a null-terminated string in JSON format with information about given #VmaVirtualBlock.
2510 \param virtualBlock Virtual block.
2511 \param[out] ppStatsString Returned string.
2512 \param detailedMap Pass `VK_FALSE` to only obtain statistics as returned by vmaCalculateVirtualBlockStatistics(). Pass `VK_TRUE` to also obtain full list of allocations and free spaces.
2513 
2514 Returned string must be freed using vmaFreeVirtualBlockStatsString().
2515 */
2516 VMA_CALL_PRE void VMA_CALL_POST vmaBuildVirtualBlockStatsString(
2517     VmaVirtualBlock VMA_NOT_NULL virtualBlock,
2518     char* VMA_NULLABLE* VMA_NOT_NULL ppStatsString,
2519     VkBool32 detailedMap);
2520 
2521 /// Frees a string returned by vmaBuildVirtualBlockStatsString().
2522 VMA_CALL_PRE void VMA_CALL_POST vmaFreeVirtualBlockStatsString(
2523     VmaVirtualBlock VMA_NOT_NULL virtualBlock,
2524     char* VMA_NULLABLE pStatsString);
2525 
2526 /** \brief Builds and returns statistics as a null-terminated string in JSON format.
2527 \param allocator
2528 \param[out] ppStatsString Must be freed using vmaFreeStatsString() function.
2529 \param detailedMap
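
Typical usage (a sketch):

\code
char* statsString = nullptr;
vmaBuildStatsString(allocator, &statsString, VK_TRUE);
// ... save statsString to a file or print it to a log ...
vmaFreeStatsString(allocator, statsString);
\endcode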
2530 */
2531 VMA_CALL_PRE void VMA_CALL_POST vmaBuildStatsString(
2532     VmaAllocator VMA_NOT_NULL allocator,
2533     char* VMA_NULLABLE* VMA_NOT_NULL ppStatsString,
2534     VkBool32 detailedMap);
2535 
2536 VMA_CALL_PRE void VMA_CALL_POST vmaFreeStatsString(
2537     VmaAllocator VMA_NOT_NULL allocator,
2538     char* VMA_NULLABLE pStatsString);
2539 
2540 /** @} */
2541 
2542 #endif // VMA_STATS_STRING_ENABLED
2543 
2544 #endif // _VMA_FUNCTION_HEADERS
2545 
2546 #ifdef __cplusplus
2547 }
2548 #endif
2549 
2550 #endif // AMD_VULKAN_MEMORY_ALLOCATOR_H
2551 
2552 ////////////////////////////////////////////////////////////////////////////////
2553 ////////////////////////////////////////////////////////////////////////////////
2554 //
2555 //    IMPLEMENTATION
2556 //
2557 ////////////////////////////////////////////////////////////////////////////////
2558 ////////////////////////////////////////////////////////////////////////////////
2559 
2560 // For Visual Studio IntelliSense.
2561 #if defined(__cplusplus) && defined(__INTELLISENSE__)
2562 #define VMA_IMPLEMENTATION
2563 #endif
2564 
2565 #ifdef VMA_IMPLEMENTATION
2566 #undef VMA_IMPLEMENTATION
2567 
2568 #include <cstdint>
2569 #include <cstdlib>
2570 #include <cstring>
2571 #include <utility>
2572 #include <type_traits>
2573 
2574 #ifdef _MSC_VER
2575     #include <intrin.h> // For functions like __popcnt, _BitScanForward etc.
2576 #endif
2577 #if __cplusplus >= 202002L || _MSVC_LANG >= 202002L // C++20
2578     #include <bit> // For std::popcount
2579 #endif
2580 
2581 /*******************************************************************************
2582 CONFIGURATION SECTION
2583 
2584 Define some of these macros before each #include of this header or change them
2585 here if you need other than the default behavior, depending on your environment.
2586 */
2587 #ifndef _VMA_CONFIGURATION
2588 
2589 /*
2590 Define this macro to 1 to make the library fetch pointers to Vulkan functions
2591 internally, like:
2592 
2593     vulkanFunctions.vkAllocateMemory = &vkAllocateMemory;
2594 */
2595 #if !defined(VMA_STATIC_VULKAN_FUNCTIONS) && !defined(VK_NO_PROTOTYPES)
2596     #define VMA_STATIC_VULKAN_FUNCTIONS 1
2597 #endif
2598 
2599 /*
2600 Define this macro to 1 to make the library fetch pointers to Vulkan functions
2601 internally, like:
2602 
2603     vulkanFunctions.vkAllocateMemory = (PFN_vkAllocateMemory)vkGetDeviceProcAddr(device, "vkAllocateMemory");
2604 
2605 To use this feature in new versions of VMA you now have to pass
2606 VmaVulkanFunctions::vkGetInstanceProcAddr and vkGetDeviceProcAddr as
2607 VmaAllocatorCreateInfo::pVulkanFunctions. Other members can be null.
2608 */
2609 #if !defined(VMA_DYNAMIC_VULKAN_FUNCTIONS)
2610     #define VMA_DYNAMIC_VULKAN_FUNCTIONS 1
2611 #endif
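
/*
For example, the initialization described above may look like this (a sketch; the members
match VmaVulkanFunctions declared earlier in this header):

    VmaVulkanFunctions vulkanFunctions = {};
    vulkanFunctions.vkGetInstanceProcAddr = vkGetInstanceProcAddr;
    vulkanFunctions.vkGetDeviceProcAddr = vkGetDeviceProcAddr;

    VmaAllocatorCreateInfo allocatorCreateInfo = {};
    allocatorCreateInfo.pVulkanFunctions = &vulkanFunctions;
    // ... fill remaining members, then call vmaCreateAllocator(&allocatorCreateInfo, &allocator).
*/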
2612 
2613 #ifndef VMA_USE_STL_SHARED_MUTEX
2614     // Compiler conforms to C++17.
2615     #if __cplusplus >= 201703L
2616         #define VMA_USE_STL_SHARED_MUTEX 1
2617     // Visual Studio defines __cplusplus properly only when passed the additional parameter /Zc:__cplusplus.
2618     // Otherwise it is always 199711L, even though shared_mutex has worked since Visual Studio 2015 Update 2.
2619     #elif defined(_MSC_FULL_VER) && _MSC_FULL_VER >= 190023918 && __cplusplus == 199711L && _MSVC_LANG >= 201703L
2620         #define VMA_USE_STL_SHARED_MUTEX 1
2621     #else
2622         #define VMA_USE_STL_SHARED_MUTEX 0
2623     #endif
2624 #endif
2625 
2626 /*
2627 Define this macro to include custom header files without having to edit this file directly, e.g.:
2628 
2629     // Inside of "my_vma_configuration_user_includes.h":
2630 
2631     #include "my_custom_assert.h" // for MY_CUSTOM_ASSERT
2632     #include "my_custom_min.h" // for my_custom_min
2633     #include <algorithm>
2634     #include <mutex>
2635 
2636     // Inside a different file, which includes "vk_mem_alloc.h":
2637 
2638     #define VMA_CONFIGURATION_USER_INCLUDES_H "my_vma_configuration_user_includes.h"
2639     #define VMA_ASSERT(expr) MY_CUSTOM_ASSERT(expr)
2640     #define VMA_MIN(v1, v2)  (my_custom_min(v1, v2))
2641     #include "vk_mem_alloc.h"
2642     ...
2643 
2644 The following headers are used in this CONFIGURATION section only, so feel free to
2645 remove them if not needed.
2646 */
2647 #if !defined(VMA_CONFIGURATION_USER_INCLUDES_H)
2648     #include <cassert> // for assert
2649     #include <algorithm> // for min, max
2650     #include <mutex>
2651 #else
2652     #include VMA_CONFIGURATION_USER_INCLUDES_H
2653 #endif
2654 
2655 #ifndef VMA_NULL
2656    // Value used as null pointer. Define it to e.g.: nullptr, NULL, 0, (void*)0.
2657    #define VMA_NULL   nullptr
2658 #endif
2659 
2660 #if defined(__ANDROID_API__) && (__ANDROID_API__ < 16)
2661 #include <cstdlib>
2662 static void* vma_aligned_alloc(size_t alignment, size_t size)
2663 {
2664     // alignment must be >= sizeof(void*)
2665     if(alignment < sizeof(void*))
2666     {
2667         alignment = sizeof(void*);
2668     }
2669 
2670     return memalign(alignment, size);
2671 }
2672 #elif defined(__APPLE__) || defined(__ANDROID__) || (defined(__linux__) && defined(__GLIBCXX__) && !defined(_GLIBCXX_HAVE_ALIGNED_ALLOC))
2673 #include <cstdlib>
2674 
2675 #if defined(__APPLE__)
2676 #include <AvailabilityMacros.h>
2677 #endif
2678 
2679 static void* vma_aligned_alloc(size_t alignment, size_t size)
2680 {
2681     // Unfortunately, aligned_alloc causes VMA to crash because it returns null pointers (at least under macOS 11.4).
2682     // Therefore, for now disable this specific exception until a proper solution is found.
2683     //#if defined(__APPLE__) && (defined(MAC_OS_X_VERSION_10_16) || defined(__IPHONE_14_0))
2684     //#if MAC_OS_X_VERSION_MAX_ALLOWED >= MAC_OS_X_VERSION_10_16 || __IPHONE_OS_VERSION_MAX_ALLOWED >= __IPHONE_14_0
2685     //    // For C++14, usr/include/malloc/_malloc.h declares aligned_alloc() only
2686     //    // with the MacOSX11.0 SDK in Xcode 12 (which is what adds
2687     //    // MAC_OS_X_VERSION_10_16), even though the function is marked
2688     //    // available for 10.15. That is why the preprocessor checks for 10.16 but
2689     //    // the __builtin_available checks for 10.15.
2690     //    // People who use C++17 could call aligned_alloc with the 10.15 SDK already.
2691     //    if (__builtin_available(macOS 10.15, iOS 13, *))
2692     //        return aligned_alloc(alignment, size);
2693     //#endif
2694     //#endif
2695 
2696     // alignment must be >= sizeof(void*)
2697     if(alignment < sizeof(void*))
2698     {
2699         alignment = sizeof(void*);
2700     }
2701 
2702     void *pointer;
2703     if(posix_memalign(&pointer, alignment, size) == 0)
2704         return pointer;
2705     return VMA_NULL;
2706 }
2707 #elif defined(_WIN32)
2708 static void* vma_aligned_alloc(size_t alignment, size_t size)
2709 {
2710     return _aligned_malloc(size, alignment);
2711 }
2712 #else
2713 static void* vma_aligned_alloc(size_t alignment, size_t size)
2714 {
2715     return aligned_alloc(alignment, size);
2716 }
2717 #endif
2718 
2719 #if defined(_WIN32)
2720 static void vma_aligned_free(void* ptr)
2721 {
2722     _aligned_free(ptr);
2723 }
2724 #else
2725 static void vma_aligned_free(void* VMA_NULLABLE ptr)
2726 {
2727     free(ptr);
2728 }
2729 #endif
2730 
2731 // If your compiler is not compatible with C++11 and the definition of the
2732 // aligned_alloc() function is missing, uncommenting the following line may help:
2733 
2734 //#include <malloc.h>
2735 
2736 // Normal assert to check for programmer's errors, especially in Debug configuration.
2737 #ifndef VMA_ASSERT
2738    #ifdef NDEBUG
2739        #define VMA_ASSERT(expr)
2740    #else
2741        #define VMA_ASSERT(expr)         assert(expr)
2742    #endif
2743 #endif
2744 
2745 // Assert that will be called very often, like inside data structures e.g. operator[].
2746 // Making it non-empty can make program slow.
2747 #ifndef VMA_HEAVY_ASSERT
2748    #ifdef NDEBUG
2749        #define VMA_HEAVY_ASSERT(expr)
2750    #else
2751        #define VMA_HEAVY_ASSERT(expr)   //VMA_ASSERT(expr)
2752    #endif
2753 #endif
2754 
2755 #ifndef VMA_ALIGN_OF
2756    #define VMA_ALIGN_OF(type)       (__alignof(type))
2757 #endif
2758 
2759 #ifndef VMA_SYSTEM_ALIGNED_MALLOC
2760    #define VMA_SYSTEM_ALIGNED_MALLOC(size, alignment) vma_aligned_alloc((alignment), (size))
2761 #endif
2762 
2763 #ifndef VMA_SYSTEM_ALIGNED_FREE
2764    // VMA_SYSTEM_FREE is the old name, but might have been defined by the user
2765    #if defined(VMA_SYSTEM_FREE)
2766       #define VMA_SYSTEM_ALIGNED_FREE(ptr)     VMA_SYSTEM_FREE(ptr)
2767    #else
2768       #define VMA_SYSTEM_ALIGNED_FREE(ptr)     vma_aligned_free(ptr)
2769     #endif
2770 #endif
2771 
2772 #ifndef VMA_COUNT_BITS_SET
2773     // Returns number of bits set to 1 in (v)
2774     #define VMA_COUNT_BITS_SET(v) VmaCountBitsSet(v)
2775 #endif
2776 
2777 #ifndef VMA_BITSCAN_LSB
2778     // Scans integer for index of the first set bit from the Least Significant Bit (LSB). If mask is 0 then returns UINT8_MAX
2779     #define VMA_BITSCAN_LSB(mask) VmaBitScanLSB(mask)
2780 #endif
2781 
2782 #ifndef VMA_BITSCAN_MSB
2783     // Scans integer for index of the first set bit from the Most Significant Bit (MSB). If mask is 0 then returns UINT8_MAX
2784     #define VMA_BITSCAN_MSB(mask) VmaBitScanMSB(mask)
2785 #endif
2786 
2787 #ifndef VMA_MIN
2788    #define VMA_MIN(v1, v2)    ((std::min)((v1), (v2)))
2789 #endif
2790 
2791 #ifndef VMA_MAX
2792    #define VMA_MAX(v1, v2)    ((std::max)((v1), (v2)))
2793 #endif
2794 
2795 #ifndef VMA_SWAP
2796    #define VMA_SWAP(v1, v2)   std::swap((v1), (v2))
2797 #endif
2798 
2799 #ifndef VMA_SORT
2800    #define VMA_SORT(beg, end, cmp)  std::sort(beg, end, cmp)
2801 #endif
2802 
2803 #ifndef VMA_DEBUG_LOG
2804    #define VMA_DEBUG_LOG(format, ...)
2805    /*
2806    #define VMA_DEBUG_LOG(format, ...) do { \
2807        printf(format, __VA_ARGS__); \
2808        printf("\n"); \
2809    } while(false)
2810    */
2811 #endif
2812 
2813 // Define this macro to 1 to enable functions: vmaBuildStatsString, vmaFreeStatsString.
2814 #if VMA_STATS_STRING_ENABLED
2815     static inline void VmaUint32ToStr(char* VMA_NOT_NULL outStr, size_t strLen, uint32_t num)
2816     {
2817         snprintf(outStr, strLen, "%u", static_cast<unsigned int>(num));
2818     }
2819     static inline void VmaUint64ToStr(char* VMA_NOT_NULL outStr, size_t strLen, uint64_t num)
2820     {
2821         snprintf(outStr, strLen, "%llu", static_cast<unsigned long long>(num));
2822     }
2823     static inline void VmaPtrToStr(char* VMA_NOT_NULL outStr, size_t strLen, const void* ptr)
2824     {
2825         snprintf(outStr, strLen, "%p", ptr);
2826     }
2827 #endif
2828 
2829 #ifndef VMA_MUTEX
2830     class VmaMutex
2831     {
2832     public:
2833         void Lock() { m_Mutex.lock(); }
2834         void Unlock() { m_Mutex.unlock(); }
2835         bool TryLock() { return m_Mutex.try_lock(); }
2836     private:
2837         std::mutex m_Mutex;
2838     };
2839     #define VMA_MUTEX VmaMutex
2840 #endif
2841 
2842 // Read-write mutex, where "read" is shared access, "write" is exclusive access.
2843 #ifndef VMA_RW_MUTEX
2844     #if VMA_USE_STL_SHARED_MUTEX
2845         // Use std::shared_mutex from C++17.
2846         #include <shared_mutex>
2847         class VmaRWMutex
2848         {
2849         public:
2850             void LockRead() { m_Mutex.lock_shared(); }
2851             void UnlockRead() { m_Mutex.unlock_shared(); }
2852             bool TryLockRead() { return m_Mutex.try_lock_shared(); }
2853             void LockWrite() { m_Mutex.lock(); }
2854             void UnlockWrite() { m_Mutex.unlock(); }
2855             bool TryLockWrite() { return m_Mutex.try_lock(); }
2856         private:
2857             std::shared_mutex m_Mutex;
2858         };
2859         #define VMA_RW_MUTEX VmaRWMutex
2860     #elif defined(_WIN32) && defined(WINVER) && WINVER >= 0x0600
2861         // Use SRWLOCK from WinAPI.
2862         // Minimum supported client = Windows Vista, server = Windows Server 2008.
2863         class VmaRWMutex
2864         {
2865         public:
2866             VmaRWMutex() { InitializeSRWLock(&m_Lock); }
2867             void LockRead() { AcquireSRWLockShared(&m_Lock); }
2868             void UnlockRead() { ReleaseSRWLockShared(&m_Lock); }
2869             bool TryLockRead() { return TryAcquireSRWLockShared(&m_Lock) != FALSE; }
2870             void LockWrite() { AcquireSRWLockExclusive(&m_Lock); }
2871             void UnlockWrite() { ReleaseSRWLockExclusive(&m_Lock); }
2872             bool TryLockWrite() { return TryAcquireSRWLockExclusive(&m_Lock) != FALSE; }
2873         private:
2874             SRWLOCK m_Lock;
2875         };
2876         #define VMA_RW_MUTEX VmaRWMutex
2877     #else
2878         // Less efficient fallback: Use normal mutex.
2879         class VmaRWMutex
2880         {
2881         public:
2882             void LockRead() { m_Mutex.Lock(); }
2883             void UnlockRead() { m_Mutex.Unlock(); }
2884             bool TryLockRead() { return m_Mutex.TryLock(); }
2885             void LockWrite() { m_Mutex.Lock(); }
2886             void UnlockWrite() { m_Mutex.Unlock(); }
2887             bool TryLockWrite() { return m_Mutex.TryLock(); }
2888         private:
2889             VMA_MUTEX m_Mutex;
2890         };
2891         #define VMA_RW_MUTEX VmaRWMutex
2892     #endif // #if VMA_USE_STL_SHARED_MUTEX
2893 #endif // #ifndef VMA_RW_MUTEX
2894 
2895 /*
2896 If providing your own implementation, you need to implement a subset of std::atomic.
2897 */
2898 #ifndef VMA_ATOMIC_UINT32
2899     #include <atomic>
2900     #define VMA_ATOMIC_UINT32 std::atomic<uint32_t>
2901 #endif
2902 
2903 #ifndef VMA_ATOMIC_UINT64
2904     #include <atomic>
2905     #define VMA_ATOMIC_UINT64 std::atomic<uint64_t>
2906 #endif
2907 
2908 #ifndef VMA_DEBUG_ALWAYS_DEDICATED_MEMORY
2909     /**
2910     Every allocation will have its own memory block.
2911     Define to 1 for debugging purposes only.
2912     */
2913     #define VMA_DEBUG_ALWAYS_DEDICATED_MEMORY (0)
2914 #endif
2915 
2916 #ifndef VMA_MIN_ALIGNMENT
2917     /**
2918     Minimum alignment of all allocations, in bytes.
2919     Set to more than 1 for debugging purposes. Must be power of two.
2920     */
2921     #ifdef VMA_DEBUG_ALIGNMENT // Old name
2922         #define VMA_MIN_ALIGNMENT VMA_DEBUG_ALIGNMENT
2923     #else
2924         #define VMA_MIN_ALIGNMENT (1)
2925     #endif
2926 #endif
2927 
2928 #ifndef VMA_DEBUG_MARGIN
2929     /**
2930     Minimum margin after every allocation, in bytes.
2931     Set nonzero for debugging purposes only.
2932     */
2933     #define VMA_DEBUG_MARGIN (0)
2934 #endif
2935 
2936 #ifndef VMA_DEBUG_INITIALIZE_ALLOCATIONS
2937     /**
2938     Define this macro to 1 to automatically fill new allocations and destroyed
2939     allocations with some bit pattern.
2940     */
2941     #define VMA_DEBUG_INITIALIZE_ALLOCATIONS (0)
2942 #endif
2943 
2944 #ifndef VMA_DEBUG_DETECT_CORRUPTION
2945     /**
2946     Define this macro to 1 together with non-zero value of VMA_DEBUG_MARGIN to
2947     enable writing magic value to the margin after every allocation and
2948     validating it, so that memory corruptions (out-of-bounds writes) are detected.
2949     */
2950     #define VMA_DEBUG_DETECT_CORRUPTION (0)
2951 #endif
2952 
2953 #ifndef VMA_DEBUG_GLOBAL_MUTEX
2954     /**
2955     Set this to 1 for debugging purposes only, to enable single mutex protecting all
2956     entry calls to the library. Can be useful for debugging multithreading issues.
2957     */
2958     #define VMA_DEBUG_GLOBAL_MUTEX (0)
2959 #endif
2960 
2961 #ifndef VMA_DEBUG_MIN_BUFFER_IMAGE_GRANULARITY
2962     /**
2963     Minimum value for VkPhysicalDeviceLimits::bufferImageGranularity.
2964     Set to more than 1 for debugging purposes only. Must be power of two.
2965     */
2966     #define VMA_DEBUG_MIN_BUFFER_IMAGE_GRANULARITY (1)
2967 #endif
2968 
2969 #ifndef VMA_DEBUG_DONT_EXCEED_MAX_MEMORY_ALLOCATION_COUNT
2970     /*
2971     Set this to 1 to make VMA never exceed VkPhysicalDeviceLimits::maxMemoryAllocationCount
2972     and return error instead of leaving up to Vulkan implementation what to do in such cases.
2973     */
2974     #define VMA_DEBUG_DONT_EXCEED_MAX_MEMORY_ALLOCATION_COUNT (0)
2975 #endif
2976 
2977 #ifndef VMA_SMALL_HEAP_MAX_SIZE
2978    /// Maximum size of a memory heap in Vulkan to consider it "small".
2979    #define VMA_SMALL_HEAP_MAX_SIZE (1024ull * 1024 * 1024)
2980 #endif
2981 
2982 #ifndef VMA_DEFAULT_LARGE_HEAP_BLOCK_SIZE
2983    /// Default size of a block allocated as single VkDeviceMemory from a "large" heap.
2984    #define VMA_DEFAULT_LARGE_HEAP_BLOCK_SIZE (256ull * 1024 * 1024)
2985 #endif
2986 
2987 /*
2988 Mapping hysteresis is logic that activates when vmaMapMemory/vmaUnmapMemory is called
2989 or a persistently mapped allocation is created and destroyed several times in a row.
2990 It keeps an additional +1 mapping of a device memory block to avoid calling the actual
2991 vkMapMemory/vkUnmapMemory too many times, which may improve performance and help
2992 tools like RenderDoc.
2993 */
2994 #ifndef VMA_MAPPING_HYSTERESIS_ENABLED
2995     #define VMA_MAPPING_HYSTERESIS_ENABLED 1
2996 #endif
2997 
2998 #ifndef VMA_CLASS_NO_COPY
2999     #define VMA_CLASS_NO_COPY(className) \
3000         private: \
3001             className(const className&) = delete; \
3002             className& operator=(const className&) = delete;
3003 #endif
3004 
3005 #define VMA_VALIDATE(cond) do { if(!(cond)) { \
3006         VMA_ASSERT(0 && "Validation failed: " #cond); \
3007         return false; \
3008     } } while(false)
3009 
3010 /*******************************************************************************
3011 END OF CONFIGURATION
3012 */
3013 #endif // _VMA_CONFIGURATION
3014 
3015 
3016 static const uint8_t VMA_ALLOCATION_FILL_PATTERN_CREATED = 0xDC;
3017 static const uint8_t VMA_ALLOCATION_FILL_PATTERN_DESTROYED = 0xEF;
3018 // Decimal 2139416166, float NaN, little-endian binary 66 E6 84 7F.
3019 static const uint32_t VMA_CORRUPTION_DETECTION_MAGIC_VALUE = 0x7F84E666;
3020 
3021 // Copy of some Vulkan definitions so we don't need to check their existence just to handle a few constants.
3022 static const uint32_t VK_MEMORY_PROPERTY_DEVICE_COHERENT_BIT_AMD_COPY = 0x00000040;
3023 static const uint32_t VK_MEMORY_PROPERTY_DEVICE_UNCACHED_BIT_AMD_COPY = 0x00000080;
3024 static const uint32_t VK_BUFFER_USAGE_SHADER_DEVICE_ADDRESS_BIT_COPY = 0x00020000;
3025 static const uint32_t VK_IMAGE_CREATE_DISJOINT_BIT_COPY = 0x00000200;
3026 static const int32_t VK_IMAGE_TILING_DRM_FORMAT_MODIFIER_EXT_COPY = 1000158000;
3027 static const uint32_t VMA_ALLOCATION_INTERNAL_STRATEGY_MIN_OFFSET = 0x10000000u;
3028 static const uint32_t VMA_ALLOCATION_TRY_COUNT = 32;
3029 static const uint32_t VMA_VENDOR_ID_AMD = 4098;
3030 
3031 // This one is tricky. Vulkan specification defines this code as available since
3032 // Vulkan 1.0, but doesn't actually define it in Vulkan SDK earlier than 1.2.131.
3033 // See pull request #207.
3034 #define VK_ERROR_UNKNOWN_COPY ((VkResult)-13)
3035 
3036 
3037 #if VMA_STATS_STRING_ENABLED
3038 // Correspond to values of enum VmaSuballocationType.
3039 static const char* VMA_SUBALLOCATION_TYPE_NAMES[] =
3040 {
3041     "FREE",
3042     "UNKNOWN",
3043     "BUFFER",
3044     "IMAGE_UNKNOWN",
3045     "IMAGE_LINEAR",
3046     "IMAGE_OPTIMAL",
3047 };
3048 #endif
3049 
3050 static VkAllocationCallbacks VmaEmptyAllocationCallbacks =
3051     { VMA_NULL, VMA_NULL, VMA_NULL, VMA_NULL, VMA_NULL, VMA_NULL };
3052 
3053 
3054 #ifndef _VMA_ENUM_DECLARATIONS
3055 
3056 enum VmaSuballocationType
3057 {
3058     VMA_SUBALLOCATION_TYPE_FREE = 0,
3059     VMA_SUBALLOCATION_TYPE_UNKNOWN = 1,
3060     VMA_SUBALLOCATION_TYPE_BUFFER = 2,
3061     VMA_SUBALLOCATION_TYPE_IMAGE_UNKNOWN = 3,
3062     VMA_SUBALLOCATION_TYPE_IMAGE_LINEAR = 4,
3063     VMA_SUBALLOCATION_TYPE_IMAGE_OPTIMAL = 5,
3064     VMA_SUBALLOCATION_TYPE_MAX_ENUM = 0x7FFFFFFF
3065 };
3066 
3067 enum VMA_CACHE_OPERATION
3068 {
3069     VMA_CACHE_FLUSH,
3070     VMA_CACHE_INVALIDATE
3071 };
3072 
3073 enum class VmaAllocationRequestType
3074 {
3075     Normal,
3076     TLSF,
3077     // Used by "Linear" algorithm.
3078     UpperAddress,
3079     EndOf1st,
3080     EndOf2nd,
3081 };
3082 
3083 #endif // _VMA_ENUM_DECLARATIONS
3084 
3085 #ifndef _VMA_FORWARD_DECLARATIONS
3086 // Opaque handle used by allocation algorithms to identify a single allocation in any conforming way.
3087 VK_DEFINE_NON_DISPATCHABLE_HANDLE(VmaAllocHandle);
3088 
3089 struct VmaMutexLock;
3090 struct VmaMutexLockRead;
3091 struct VmaMutexLockWrite;
3092 
3093 template<typename T>
3094 struct AtomicTransactionalIncrement;
3095 
3096 template<typename T>
3097 struct VmaStlAllocator;
3098 
3099 template<typename T, typename AllocatorT>
3100 class VmaVector;
3101 
3102 template<typename T, typename AllocatorT, size_t N>
3103 class VmaSmallVector;
3104 
3105 template<typename T>
3106 class VmaPoolAllocator;
3107 
3108 template<typename T>
3109 struct VmaListItem;
3110 
3111 template<typename T>
3112 class VmaRawList;
3113 
3114 template<typename T, typename AllocatorT>
3115 class VmaList;
3116 
3117 template<typename ItemTypeTraits>
3118 class VmaIntrusiveLinkedList;
3119 
3120 // Unused in this version
3121 #if 0
3122 template<typename T1, typename T2>
3123 struct VmaPair;
3124 template<typename FirstT, typename SecondT>
3125 struct VmaPairFirstLess;
3126 
3127 template<typename KeyT, typename ValueT>
3128 class VmaMap;
3129 #endif
3130 
3131 #if VMA_STATS_STRING_ENABLED
3132 class VmaStringBuilder;
3133 class VmaJsonWriter;
3134 #endif
3135 
3136 class VmaDeviceMemoryBlock;
3137 
3138 struct VmaDedicatedAllocationListItemTraits;
3139 class VmaDedicatedAllocationList;
3140 
3141 struct VmaSuballocation;
3142 struct VmaSuballocationOffsetLess;
3143 struct VmaSuballocationOffsetGreater;
3144 struct VmaSuballocationItemSizeLess;
3145 
3146 typedef VmaList<VmaSuballocation, VmaStlAllocator<VmaSuballocation>> VmaSuballocationList;
3147 
3148 struct VmaAllocationRequest;
3149 
3150 class VmaBlockMetadata;
3151 class VmaBlockMetadata_Linear;
3152 class VmaBlockMetadata_TLSF;
3153 
3154 class VmaBlockVector;
3155 
3156 struct VmaPoolListItemTraits;
3157 
3158 struct VmaCurrentBudgetData;
3159 
3160 class VmaAllocationObjectAllocator;
3161 
3162 #endif // _VMA_FORWARD_DECLARATIONS
3163 
3164 
3165 #ifndef _VMA_FUNCTIONS
3166 
3167 /*
3168 Returns number of bits set to 1 in (v).
3169 
3170 On specific platforms and compilers you can use intrinsics like:
3171 
3172 Visual Studio:
3173     return __popcnt(v);
3174 GCC, Clang:
3175     return static_cast<uint32_t>(__builtin_popcount(v));
3176 
3177 Define the macro VMA_COUNT_BITS_SET to provide your own optimized implementation,
3178 but you then need to check at runtime whether the user's CPU supports these intrinsics, as some old processors don't.
3179 */
3180 static inline uint32_t VmaCountBitsSet(uint32_t v)
3181 {
3182 #if __cplusplus >= 202002L || _MSVC_LANG >= 202002L // C++20
3183     return std::popcount(v);
3184 #else
3185     uint32_t c = v - ((v >> 1) & 0x55555555);
3186     c = ((c >> 2) & 0x33333333) + (c & 0x33333333);
3187     c = ((c >> 4) + c) & 0x0F0F0F0F;
3188     c = ((c >> 8) + c) & 0x00FF00FF;
3189     c = ((c >> 16) + c) & 0x0000FFFF;
3190     return c;
3191 #endif
3192 }
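
// Worked example of the SWAR fallback above: for v = 13 (0b1101),
// c = 13 - ((13 >> 1) & 0x55555555) = 13 - 4 = 9 = 0b1001, whose 2-bit groups hold the
// partial counts 2 and 1; the subsequent steps sum the groups, yielding 3 bits set.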
3193 
3194 static inline uint8_t VmaBitScanLSB(uint64_t mask)
3195 {
3196 #if defined(_MSC_VER) && defined(_WIN64)
3197     unsigned long pos;
3198     if (_BitScanForward64(&pos, mask))
3199         return static_cast<uint8_t>(pos);
3200     return UINT8_MAX;
3201 #elif defined __GNUC__ || defined __clang__
3202     return static_cast<uint8_t>(__builtin_ffsll(mask)) - 1U;
3203 #else
3204     uint8_t pos = 0;
3205     uint64_t bit = 1;
3206     do
3207     {
3208         if (mask & bit)
3209             return pos;
3210         bit <<= 1;
3211     } while (pos++ < 63);
3212     return UINT8_MAX;
3213 #endif
3214 }
3215 
3216 static inline uint8_t VmaBitScanLSB(uint32_t mask)
3217 {
3218 #ifdef _MSC_VER
3219     unsigned long pos;
3220     if (_BitScanForward(&pos, mask))
3221         return static_cast<uint8_t>(pos);
3222     return UINT8_MAX;
3223 #elif defined __GNUC__ || defined __clang__
3224     return static_cast<uint8_t>(__builtin_ffs(mask)) - 1U;
3225 #else
3226     uint8_t pos = 0;
3227     uint32_t bit = 1;
3228     do
3229     {
3230         if (mask & bit)
3231             return pos;
3232         bit <<= 1;
3233     } while (pos++ < 31);
3234     return UINT8_MAX;
3235 #endif
3236 }
3237 
3238 static inline uint8_t VmaBitScanMSB(uint64_t mask)
3239 {
3240 #if defined(_MSC_VER) && defined(_WIN64)
3241     unsigned long pos;
3242     if (_BitScanReverse64(&pos, mask))
3243         return static_cast<uint8_t>(pos);
3244 #elif defined __GNUC__ || defined __clang__
3245     if (mask)
3246         return 63 - static_cast<uint8_t>(__builtin_clzll(mask));
3247 #else
3248     uint8_t pos = 63;
3249     uint64_t bit = 1ULL << 63;
3250     do
3251     {
3252         if (mask & bit)
3253             return pos;
3254         bit >>= 1;
3255     } while (pos-- > 0);
3256 #endif
3257     return UINT8_MAX;
3258 }
3259 
3260 static inline uint8_t VmaBitScanMSB(uint32_t mask)
3261 {
3262 #ifdef _MSC_VER
3263     unsigned long pos;
3264     if (_BitScanReverse(&pos, mask))
3265         return static_cast<uint8_t>(pos);
3266 #elif defined __GNUC__ || defined __clang__
3267     if (mask)
3268         return 31 - static_cast<uint8_t>(__builtin_clz(mask));
3269 #else
3270     uint8_t pos = 31;
3271     uint32_t bit = 1UL << 31;
3272     do
3273     {
3274         if (mask & bit)
3275             return pos;
3276         bit >>= 1;
3277     } while (pos-- > 0);
3278 #endif
3279     return UINT8_MAX;
3280 }
3281 
3282 /*
3283 Returns true if given number is a power of two.
3284 T must be an unsigned integer type, or a signed integer that is always nonnegative.
3285 For 0 returns true.
3286 */
3287 template <typename T>
3288 inline bool VmaIsPow2(T x)
3289 {
3290     return (x & (x - 1)) == 0;
3291 }
3292 
3293 // Aligns given value up to nearest multiple of align value. For example: VmaAlignUp(11, 8) = 16.
3294 // Use types like uint32_t, uint64_t as T.
3295 template <typename T>
3296 static inline T VmaAlignUp(T val, T alignment)
3297 {
3298     VMA_HEAVY_ASSERT(VmaIsPow2(alignment));
3299     return (val + alignment - 1) & ~(alignment - 1);
3300 }
3301 
3302 // Aligns given value down to nearest multiple of align value. For example: VmaAlignDown(11, 8) = 8.
3303 // Use types like uint32_t, uint64_t as T.
3304 template <typename T>
3305 static inline T VmaAlignDown(T val, T alignment)
3306 {
3307     VMA_HEAVY_ASSERT(VmaIsPow2(alignment));
3308     return val & ~(alignment - 1);
3309 }
3310 
3311 // Division with mathematical rounding to nearest number.
3312 template <typename T>
3313 static inline T VmaRoundDiv(T x, T y)
3314 {
3315     return (x + (y / (T)2)) / y;
3316 }
3317 
3318 // Divide by 'y' and round up to nearest integer.
3319 template <typename T>
3320 static inline T VmaDivideRoundingUp(T x, T y)
3321 {
3322     return (x + y - (T)1) / y;
3323 }
3324 
3325 // Returns smallest power of 2 greater or equal to v.
3326 static inline uint32_t VmaNextPow2(uint32_t v)
3327 {
3328     v--;
3329     v |= v >> 1;
3330     v |= v >> 2;
3331     v |= v >> 4;
3332     v |= v >> 8;
3333     v |= v >> 16;
3334     v++;
3335     return v;
3336 }
3337 
3338 static inline uint64_t VmaNextPow2(uint64_t v)
3339 {
3340     v--;
3341     v |= v >> 1;
3342     v |= v >> 2;
3343     v |= v >> 4;
3344     v |= v >> 8;
3345     v |= v >> 16;
3346     v |= v >> 32;
3347     v++;
3348     return v;
3349 }
3350 
3351 // Returns largest power of 2 less or equal to v.
3352 static inline uint32_t VmaPrevPow2(uint32_t v)
3353 {
3354     v |= v >> 1;
3355     v |= v >> 2;
3356     v |= v >> 4;
3357     v |= v >> 8;
3358     v |= v >> 16;
3359     v = v ^ (v >> 1);
3360     return v;
3361 }
3362 
3363 static inline uint64_t VmaPrevPow2(uint64_t v)
3364 {
3365     v |= v >> 1;
3366     v |= v >> 2;
3367     v |= v >> 4;
3368     v |= v >> 8;
3369     v |= v >> 16;
3370     v |= v >> 32;
3371     v = v ^ (v >> 1);
3372     return v;
3373 }
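
// Worked examples for the helpers above: VmaNextPow2(300u) == 512 and VmaPrevPow2(300u) == 256,
// while a value that is already a power of 2 is returned unchanged, e.g. VmaNextPow2(512u) == 512.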
3374 
3375 static inline bool VmaStrIsEmpty(const char* pStr)
3376 {
3377     return pStr == VMA_NULL || *pStr == '\0';
3378 }
3379 
3380 #ifndef VMA_SORT
3381 template<typename Iterator, typename Compare>
3382 Iterator VmaQuickSortPartition(Iterator beg, Iterator end, Compare cmp)
3383 {
3384     Iterator centerValue = end; --centerValue;
3385     Iterator insertIndex = beg;
3386     for (Iterator memTypeIndex = beg; memTypeIndex < centerValue; ++memTypeIndex)
3387     {
3388         if (cmp(*memTypeIndex, *centerValue))
3389         {
3390             if (insertIndex != memTypeIndex)
3391             {
3392                 VMA_SWAP(*memTypeIndex, *insertIndex);
3393             }
3394             ++insertIndex;
3395         }
3396     }
3397     if (insertIndex != centerValue)
3398     {
3399         VMA_SWAP(*insertIndex, *centerValue);
3400     }
3401     return insertIndex;
3402 }
3403 
3404 template<typename Iterator, typename Compare>
3405 void VmaQuickSort(Iterator beg, Iterator end, Compare cmp)
3406 {
3407     if (beg < end)
3408     {
3409         Iterator it = VmaQuickSortPartition<Iterator, Compare>(beg, end, cmp);
3410         VmaQuickSort<Iterator, Compare>(beg, it, cmp);
3411         VmaQuickSort<Iterator, Compare>(it + 1, end, cmp);
3412     }
3413 }
3414 
3415 #define VMA_SORT(beg, end, cmp) VmaQuickSort(beg, end, cmp)
3416 #endif // VMA_SORT
3417 
3418 /*
3419 Returns true if two memory blocks occupy overlapping pages.
3420 ResourceA must be at a lower memory offset than ResourceB.
3421 
3422 Algorithm is based on "Vulkan 1.0.39 - A Specification (with all registered Vulkan extensions)"
3423 chapter 11.6 "Resource Memory Association", paragraph "Buffer-Image Granularity".
3424 */
3425 static inline bool VmaBlocksOnSamePage(
3426     VkDeviceSize resourceAOffset,
3427     VkDeviceSize resourceASize,
3428     VkDeviceSize resourceBOffset,
3429     VkDeviceSize pageSize)
3430 {
3431     VMA_ASSERT(resourceAOffset + resourceASize <= resourceBOffset && resourceASize > 0 && pageSize > 0);
3432     VkDeviceSize resourceAEnd = resourceAOffset + resourceASize - 1;
3433     VkDeviceSize resourceAEndPage = resourceAEnd & ~(pageSize - 1);
3434     VkDeviceSize resourceBStart = resourceBOffset;
3435     VkDeviceSize resourceBStartPage = resourceBStart & ~(pageSize - 1);
3436     return resourceAEndPage == resourceBStartPage;
3437 }
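
// Worked example: with pageSize = 1024, a resource at offset 512 with size 600 ends at byte 1111,
// whose page starts at 1024; a second resource starting at offset 1200 lies on the same page
// (1200 & ~1023 == 1024), so the function returns true and bufferImageGranularity must be respected.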
3438 
3439 /*
3440 Returns true if given suballocation types could conflict and must respect
3441 VkPhysicalDeviceLimits::bufferImageGranularity. They conflict if one is buffer
3442 or linear image and another one is optimal image. If type is unknown, behave
3443 conservatively.
3444 */
3445 static inline bool VmaIsBufferImageGranularityConflict(
3446     VmaSuballocationType suballocType1,
3447     VmaSuballocationType suballocType2)
3448 {
3449     if (suballocType1 > suballocType2)
3450     {
3451         VMA_SWAP(suballocType1, suballocType2);
3452     }
3453 
3454     switch (suballocType1)
3455     {
3456     case VMA_SUBALLOCATION_TYPE_FREE:
3457         return false;
3458     case VMA_SUBALLOCATION_TYPE_UNKNOWN:
3459         return true;
3460     case VMA_SUBALLOCATION_TYPE_BUFFER:
3461         return
3462             suballocType2 == VMA_SUBALLOCATION_TYPE_IMAGE_UNKNOWN ||
3463             suballocType2 == VMA_SUBALLOCATION_TYPE_IMAGE_OPTIMAL;
3464     case VMA_SUBALLOCATION_TYPE_IMAGE_UNKNOWN:
3465         return
3466             suballocType2 == VMA_SUBALLOCATION_TYPE_IMAGE_UNKNOWN ||
3467             suballocType2 == VMA_SUBALLOCATION_TYPE_IMAGE_LINEAR ||
3468             suballocType2 == VMA_SUBALLOCATION_TYPE_IMAGE_OPTIMAL;
3469     case VMA_SUBALLOCATION_TYPE_IMAGE_LINEAR:
3470         return
3471             suballocType2 == VMA_SUBALLOCATION_TYPE_IMAGE_OPTIMAL;
3472     case VMA_SUBALLOCATION_TYPE_IMAGE_OPTIMAL:
3473         return false;
3474     default:
3475         VMA_ASSERT(0);
3476         return true;
3477     }
3478 }
3479 
3480 static void VmaWriteMagicValue(void* pData, VkDeviceSize offset)
3481 {
3482 #if VMA_DEBUG_MARGIN > 0 && VMA_DEBUG_DETECT_CORRUPTION
3483     uint32_t* pDst = (uint32_t*)((char*)pData + offset);
3484     const size_t numberCount = VMA_DEBUG_MARGIN / sizeof(uint32_t);
3485     for (size_t i = 0; i < numberCount; ++i, ++pDst)
3486     {
3487         *pDst = VMA_CORRUPTION_DETECTION_MAGIC_VALUE;
3488     }
3489 #else
3490     // no-op
3491 #endif
3492 }
3493 
3494 static bool VmaValidateMagicValue(const void* pData, VkDeviceSize offset)
3495 {
3496 #if VMA_DEBUG_MARGIN > 0 && VMA_DEBUG_DETECT_CORRUPTION
3497     const uint32_t* pSrc = (const uint32_t*)((const char*)pData + offset);
3498     const size_t numberCount = VMA_DEBUG_MARGIN / sizeof(uint32_t);
3499     for (size_t i = 0; i < numberCount; ++i, ++pSrc)
3500     {
3501         if (*pSrc != VMA_CORRUPTION_DETECTION_MAGIC_VALUE)
3502         {
3503             return false;
3504         }
3505     }
3506 #endif
3507     return true;
3508 }
3509 
3510 /*
3511 Fills structure with parameters of an example buffer to be used for transfers
3512 during GPU memory defragmentation.
3513 */
3514 static void VmaFillGpuDefragmentationBufferCreateInfo(VkBufferCreateInfo& outBufCreateInfo)
3515 {
3516     memset(&outBufCreateInfo, 0, sizeof(outBufCreateInfo));
3517     outBufCreateInfo.sType = VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO;
3518     outBufCreateInfo.usage = VK_BUFFER_USAGE_TRANSFER_SRC_BIT | VK_BUFFER_USAGE_TRANSFER_DST_BIT;
3519     outBufCreateInfo.size = (VkDeviceSize)VMA_DEFAULT_LARGE_HEAP_BLOCK_SIZE; // Example size.
3520 }
3521 
3522 
3523 /*
3524 Performs binary search and returns iterator to first element that is greater or
3525 equal to (key), according to comparison (cmp).
3526 
3527 Cmp should return true if first argument is less than second argument.
3528 
3529 The returned value is the found element, if present in the collection, or the place where a
3530 new element with value (key) should be inserted.
3531 */
3532 template <typename CmpLess, typename IterT, typename KeyT>
3533 static IterT VmaBinaryFindFirstNotLess(IterT beg, IterT end, const KeyT& key, const CmpLess& cmp)
3534 {
3535     size_t down = 0, up = (end - beg);
3536     while (down < up)
3537     {
3538         const size_t mid = down + (up - down) / 2;  // Overflow-safe midpoint calculation
3539         if (cmp(*(beg + mid), key))
3540         {
3541             down = mid + 1;
3542         }
3543         else
3544         {
3545             up = mid;
3546         }
3547     }
3548     return beg + down;
3549 }
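
// Usage sketch (a hypothetical lower_bound-style lookup in a sorted array of offsets):
//
//     const VkDeviceSize offsets[] = { 0, 256, 1024, 4096 };
//     const auto cmpLess = [](VkDeviceSize a, VkDeviceSize b) { return a < b; };
//     const VkDeviceSize* it = VmaBinaryFindFirstNotLess(offsets, offsets + 4, VkDeviceSize(1000), cmpLess);
//     // it now points to 1024 - the first element not less than 1000.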
3550 
3551 template<typename CmpLess, typename IterT, typename KeyT>
3552 IterT VmaBinaryFindSorted(const IterT& beg, const IterT& end, const KeyT& value, const CmpLess& cmp)
3553 {
3554     IterT it = VmaBinaryFindFirstNotLess<CmpLess, IterT, KeyT>(
3555         beg, end, value, cmp);
3556     if (it == end ||
3557         (!cmp(*it, value) && !cmp(value, *it)))
3558     {
3559         return it;
3560     }
3561     return end;
3562 }
3563 
3564 /*
3565 Returns true if all pointers in the array are non-null and unique.
3566 Warning! O(n^2) complexity. Use only inside VMA_HEAVY_ASSERT.
3567 T must be pointer type, e.g. VmaAllocation, VmaPool.
3568 */
3569 template<typename T>
3570 static bool VmaValidatePointerArray(uint32_t count, const T* arr)
3571 {
3572     for (uint32_t i = 0; i < count; ++i)
3573     {
3574         const T iPtr = arr[i];
3575         if (iPtr == VMA_NULL)
3576         {
3577             return false;
3578         }
3579         for (uint32_t j = i + 1; j < count; ++j)
3580         {
3581             if (iPtr == arr[j])
3582             {
3583                 return false;
3584             }
3585         }
3586     }
3587     return true;
3588 }
3589 
3590 template<typename MainT, typename NewT>
3591 static inline void VmaPnextChainPushFront(MainT* mainStruct, NewT* newStruct)
3592 {
3593     newStruct->pNext = mainStruct->pNext;
3594     mainStruct->pNext = newStruct;
3595 }
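
// Usage sketch, mirroring how the library links extension structs into a pNext chain:
//
//     VkMemoryAllocateInfo allocInfo = { VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_INFO };
//     VkMemoryDedicatedAllocateInfoKHR dedicatedInfo = { VK_STRUCTURE_TYPE_MEMORY_DEDICATED_ALLOCATE_INFO_KHR };
//     VmaPnextChainPushFront(&allocInfo, &dedicatedInfo);
//     // allocInfo.pNext now points to dedicatedInfo, with any prior chain preserved behind it.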
3596 
3597 // This is the main algorithm that guides the selection of a memory type best for an allocation -
3598 // converts usage to required/preferred/not preferred flags.
3599 static bool FindMemoryPreferences(
3600     bool isIntegratedGPU,
3601     const VmaAllocationCreateInfo& allocCreateInfo,
3602     VkFlags bufImgUsage, // VkBufferCreateInfo::usage or VkImageCreateInfo::usage. UINT32_MAX if unknown.
3603     VkMemoryPropertyFlags& outRequiredFlags,
3604     VkMemoryPropertyFlags& outPreferredFlags,
3605     VkMemoryPropertyFlags& outNotPreferredFlags)
3606 {
3607     outRequiredFlags = allocCreateInfo.requiredFlags;
3608     outPreferredFlags = allocCreateInfo.preferredFlags;
3609     outNotPreferredFlags = 0;
3610 
3611     switch(allocCreateInfo.usage)
3612     {
3613     case VMA_MEMORY_USAGE_UNKNOWN:
3614         break;
3615     case VMA_MEMORY_USAGE_GPU_ONLY:
3616         if(!isIntegratedGPU || (outPreferredFlags & VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT) == 0)
3617         {
3618             outPreferredFlags |= VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT;
3619         }
3620         break;
3621     case VMA_MEMORY_USAGE_CPU_ONLY:
3622         outRequiredFlags |= VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT | VK_MEMORY_PROPERTY_HOST_COHERENT_BIT;
3623         break;
3624     case VMA_MEMORY_USAGE_CPU_TO_GPU:
3625         outRequiredFlags |= VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT;
3626         if(!isIntegratedGPU || (outPreferredFlags & VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT) == 0)
3627         {
3628             outPreferredFlags |= VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT;
3629         }
3630         break;
3631     case VMA_MEMORY_USAGE_GPU_TO_CPU:
3632         outRequiredFlags |= VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT;
3633         outPreferredFlags |= VK_MEMORY_PROPERTY_HOST_CACHED_BIT;
3634         break;
3635     case VMA_MEMORY_USAGE_CPU_COPY:
3636         outNotPreferredFlags |= VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT;
3637         break;
3638     case VMA_MEMORY_USAGE_GPU_LAZILY_ALLOCATED:
3639         outRequiredFlags |= VK_MEMORY_PROPERTY_LAZILY_ALLOCATED_BIT;
3640         break;
3641     case VMA_MEMORY_USAGE_AUTO:
3642     case VMA_MEMORY_USAGE_AUTO_PREFER_DEVICE:
3643     case VMA_MEMORY_USAGE_AUTO_PREFER_HOST:
3644     {
3645         if(bufImgUsage == UINT32_MAX)
3646         {
3647             VMA_ASSERT(0 && "VMA_MEMORY_USAGE_AUTO* values can only be used with functions like vmaCreateBuffer, vmaCreateImage so that the details of the created resource are known.");
3648             return false;
3649         }
3650         // This relies on the values of VK_IMAGE_USAGE_TRANSFER_* being the same as VK_BUFFER_USAGE_TRANSFER_*.
3651         const bool deviceAccess = (bufImgUsage & ~(VK_BUFFER_USAGE_TRANSFER_DST_BIT | VK_BUFFER_USAGE_TRANSFER_SRC_BIT)) != 0;
3652         const bool hostAccessSequentialWrite = (allocCreateInfo.flags & VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT) != 0;
3653         const bool hostAccessRandom = (allocCreateInfo.flags & VMA_ALLOCATION_CREATE_HOST_ACCESS_RANDOM_BIT) != 0;
3654         const bool hostAccessAllowTransferInstead = (allocCreateInfo.flags & VMA_ALLOCATION_CREATE_HOST_ACCESS_ALLOW_TRANSFER_INSTEAD_BIT) != 0;
3655         const bool preferDevice = allocCreateInfo.usage == VMA_MEMORY_USAGE_AUTO_PREFER_DEVICE;
3656         const bool preferHost = allocCreateInfo.usage == VMA_MEMORY_USAGE_AUTO_PREFER_HOST;
3657 
3658         // CPU random access - e.g. a buffer written to or transferred from GPU to read back on CPU.
3659         if(hostAccessRandom)
3660         {
3661             if(!isIntegratedGPU && deviceAccess && hostAccessAllowTransferInstead && !preferHost)
3662             {
3663                 // Nice if it will end up in HOST_VISIBLE, but more importantly prefer DEVICE_LOCAL.
3664                 // Omitting HOST_VISIBLE here is intentional.
3665                 // In case there is DEVICE_LOCAL | HOST_VISIBLE | HOST_CACHED, it will pick that one.
3666                 // Otherwise, this will give the same weight to DEVICE_LOCAL as HOST_VISIBLE | HOST_CACHED and select the former if it occurs first on the list.
3667                 outPreferredFlags |= VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT | VK_MEMORY_PROPERTY_HOST_CACHED_BIT;
3668             }
3669             else
3670             {
3671                 // Always CPU memory, cached.
3672                 outRequiredFlags |= VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT | VK_MEMORY_PROPERTY_HOST_CACHED_BIT;
3673             }
3674         }
3675         // CPU sequential write - may be CPU or host-visible GPU memory, uncached and write-combined.
3676         else if(hostAccessSequentialWrite)
3677         {
3678             // Want uncached and write-combined.
3679             outNotPreferredFlags |= VK_MEMORY_PROPERTY_HOST_CACHED_BIT;
3680 
3681             if(!isIntegratedGPU && deviceAccess && hostAccessAllowTransferInstead && !preferHost)
3682             {
3683                 outPreferredFlags |= VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT | VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT;
3684             }
3685             else
3686             {
3687                 outRequiredFlags |= VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT;
3688                 // Direct GPU access, CPU sequential write (e.g. a dynamic uniform buffer updated every frame)
3689                 if(deviceAccess)
3690                 {
3691                     // Could go to CPU memory or GPU BAR/unified. Up to the user to decide. If no preference, choose GPU memory.
3692                     if(preferHost)
3693                         outNotPreferredFlags |= VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT;
3694                     else
3695                         outPreferredFlags |= VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT;
3696                 }
3697                 // GPU no direct access, CPU sequential write (e.g. an upload buffer to be transferred to the GPU)
3698                 else
3699                 {
3700                     // Could go to CPU memory or GPU BAR/unified. Up to the user to decide. If no preference, choose CPU memory.
3701                     if(preferDevice)
3702                         outPreferredFlags |= VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT;
3703                     else
3704                         outNotPreferredFlags |= VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT;
3705                 }
3706             }
3707         }
3708         // No CPU access
3709         else
3710         {
3711             // GPU access, no CPU access (e.g. a color attachment image) - prefer GPU memory
3712             if(deviceAccess)
3713             {
3714                 // ...unless there is a clear preference from the user not to do so.
3715                 if(preferHost)
3716                     outNotPreferredFlags |= VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT;
3717                 else
3718                     outPreferredFlags |= VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT;
3719             }
3720             // No direct GPU access, no CPU access, just transfers.
3721             // It may be staging copy intended for e.g. preserving image for next frame (then better GPU memory) or
3722             // a "swap file" copy to free some GPU memory (then better CPU memory).
3723             // Up to the user to decide. If no preferece, assume the former and choose GPU memory.
3724             if(preferHost)
3725                 outNotPreferredFlags |= VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT;
3726             else
3727                 outPreferredFlags |= VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT;
3728         }
3729         break;
3730     }
3731     default:
3732         VMA_ASSERT(0);
3733     }
3734 
3735     // Avoid DEVICE_COHERENT unless explicitly requested.
3736     if(((allocCreateInfo.requiredFlags | allocCreateInfo.preferredFlags) &
3737         (VK_MEMORY_PROPERTY_DEVICE_COHERENT_BIT_AMD_COPY | VK_MEMORY_PROPERTY_DEVICE_UNCACHED_BIT_AMD_COPY)) == 0)
3738     {
3739         outNotPreferredFlags |= VK_MEMORY_PROPERTY_DEVICE_UNCACHED_BIT_AMD_COPY;
3740     }
3741 
3742     return true;
3743 }
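// Illustrative sketch (not part of the library): how the preference logic
// above is typically driven from the public API. A readback buffer created
// with VMA_MEMORY_USAGE_AUTO and HOST_ACCESS_RANDOM takes the
// hostAccessRandom branch and requires HOST_VISIBLE | HOST_CACHED memory.
// Guarded out of compilation; the variable names are hypothetical.
#if 0
VmaAllocator allocator; // assume created earlier with vmaCreateAllocator()

VkBufferCreateInfo bufCreateInfo = { VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO };
bufCreateInfo.size = 65536;
bufCreateInfo.usage = VK_BUFFER_USAGE_TRANSFER_DST_BIT;

VmaAllocationCreateInfo allocCreateInfo = {};
allocCreateInfo.usage = VMA_MEMORY_USAGE_AUTO;
allocCreateInfo.flags = VMA_ALLOCATION_CREATE_HOST_ACCESS_RANDOM_BIT;

VkBuffer buffer;
VmaAllocation allocation;
vmaCreateBuffer(allocator, &bufCreateInfo, &allocCreateInfo, &buffer, &allocation, VMA_NULL);
#endif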

////////////////////////////////////////////////////////////////////////////////
// Memory allocation

static void* VmaMalloc(const VkAllocationCallbacks* pAllocationCallbacks, size_t size, size_t alignment)
{
    void* result = VMA_NULL;
    if ((pAllocationCallbacks != VMA_NULL) &&
        (pAllocationCallbacks->pfnAllocation != VMA_NULL))
    {
        result = (*pAllocationCallbacks->pfnAllocation)(
            pAllocationCallbacks->pUserData,
            size,
            alignment,
            VK_SYSTEM_ALLOCATION_SCOPE_OBJECT);
    }
    else
    {
        result = VMA_SYSTEM_ALIGNED_MALLOC(size, alignment);
    }
    VMA_ASSERT(result != VMA_NULL && "CPU memory allocation failed.");
    return result;
}

static void VmaFree(const VkAllocationCallbacks* pAllocationCallbacks, void* ptr)
{
    if ((pAllocationCallbacks != VMA_NULL) &&
        (pAllocationCallbacks->pfnFree != VMA_NULL))
    {
        (*pAllocationCallbacks->pfnFree)(pAllocationCallbacks->pUserData, ptr);
    }
    else
    {
        VMA_SYSTEM_ALIGNED_FREE(ptr);
    }
}

template<typename T>
static T* VmaAllocate(const VkAllocationCallbacks* pAllocationCallbacks)
{
    return (T*)VmaMalloc(pAllocationCallbacks, sizeof(T), VMA_ALIGN_OF(T));
}

template<typename T>
static T* VmaAllocateArray(const VkAllocationCallbacks* pAllocationCallbacks, size_t count)
{
    return (T*)VmaMalloc(pAllocationCallbacks, sizeof(T) * count, VMA_ALIGN_OF(T));
}

#define vma_new(allocator, type)   new(VmaAllocate<type>(allocator))(type)

#define vma_new_array(allocator, type, count)   new(VmaAllocateArray<type>((allocator), (count)))(type)

template<typename T>
static void vma_delete(const VkAllocationCallbacks* pAllocationCallbacks, T* ptr)
{
    ptr->~T();
    VmaFree(pAllocationCallbacks, ptr);
}
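
// Illustrative sketch (not part of the library): vma_new pairs with
// vma_delete, and vma_new_array with vma_delete_array below, so that every
// CPU-side allocation is routed through the user-provided
// VkAllocationCallbacks (or the aligned-malloc fallback). Guarded out of
// compilation; "allocs" is a hypothetical callbacks pointer.
#if 0
const VkAllocationCallbacks* allocs = VMA_NULL; // or user-provided callbacks
int* single = vma_new(allocs, int)(42);         // allocate + placement-construct one object
vma_delete(allocs, single);                     // destroy + free it
#endif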

template<typename T>
static void vma_delete_array(const VkAllocationCallbacks* pAllocationCallbacks, T* ptr, size_t count)
{
    if (ptr != VMA_NULL)
    {
        for (size_t i = count; i--; )
        {
            ptr[i].~T();
        }
        VmaFree(pAllocationCallbacks, ptr);
    }
}

static char* VmaCreateStringCopy(const VkAllocationCallbacks* allocs, const char* srcStr)
{
    if (srcStr != VMA_NULL)
    {
        const size_t len = strlen(srcStr);
        char* const result = vma_new_array(allocs, char, len + 1);
        memcpy(result, srcStr, len + 1);
        return result;
    }
    return VMA_NULL;
}

#if VMA_STATS_STRING_ENABLED
static char* VmaCreateStringCopy(const VkAllocationCallbacks* allocs, const char* srcStr, size_t strLen)
{
    if (srcStr != VMA_NULL)
    {
        char* const result = vma_new_array(allocs, char, strLen + 1);
        memcpy(result, srcStr, strLen);
        result[strLen] = '\0';
        return result;
    }
    return VMA_NULL;
}
#endif // VMA_STATS_STRING_ENABLED

static void VmaFreeString(const VkAllocationCallbacks* allocs, char* str)
{
    if (str != VMA_NULL)
    {
        const size_t len = strlen(str);
        vma_delete_array(allocs, str, len + 1);
    }
}

template<typename CmpLess, typename VectorT>
size_t VmaVectorInsertSorted(VectorT& vector, const typename VectorT::value_type& value)
{
    const size_t indexToInsert = VmaBinaryFindFirstNotLess(
        vector.data(),
        vector.data() + vector.size(),
        value,
        CmpLess()) - vector.data();
    VmaVectorInsert(vector, indexToInsert, value);
    return indexToInsert;
}

template<typename CmpLess, typename VectorT>
bool VmaVectorRemoveSorted(VectorT& vector, const typename VectorT::value_type& value)
{
    CmpLess comparator;
    typename VectorT::iterator it = VmaBinaryFindFirstNotLess(
        vector.begin(),
        vector.end(),
        value,
        comparator);
    if ((it != vector.end()) && !comparator(*it, value) && !comparator(value, *it))
    {
        size_t indexToRemove = it - vector.begin();
        VmaVectorRemove(vector, indexToRemove);
        return true;
    }
    return false;
}
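
// Illustrative sketch (not part of the library): these helpers keep a vector
// sorted under a strict-weak-ordering comparator; "equality" is derived from
// the comparator itself as !cmp(a,b) && !cmp(b,a), so no operator== is
// needed. Guarded out of compilation; "MyCmp", "allocs" and "vec" are
// hypothetical (VmaVector is defined later in this file).
#if 0
struct MyCmp { bool operator()(int a, int b) const { return a < b; } };

const VkAllocationCallbacks* allocs = VMA_NULL;
VmaVector<int, VmaStlAllocator<int>> vec(VmaStlAllocator<int>(allocs)); // kept sorted
VmaVectorInsertSorted<MyCmp>(vec, 7);   // binary-searches the position, inserts 7
VmaVectorRemoveSorted<MyCmp>(vec, 7);   // binary-searches for 7 and removes it
#endif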
#endif // _VMA_FUNCTIONS

#ifndef _VMA_STATISTICS_FUNCTIONS

static void VmaClearStatistics(VmaStatistics& outStats)
{
    outStats.blockCount = 0;
    outStats.allocationCount = 0;
    outStats.blockBytes = 0;
    outStats.allocationBytes = 0;
}

static void VmaAddStatistics(VmaStatistics& inoutStats, const VmaStatistics& src)
{
    inoutStats.blockCount += src.blockCount;
    inoutStats.allocationCount += src.allocationCount;
    inoutStats.blockBytes += src.blockBytes;
    inoutStats.allocationBytes += src.allocationBytes;
}

static void VmaClearDetailedStatistics(VmaDetailedStatistics& outStats)
{
    VmaClearStatistics(outStats.statistics);
    outStats.unusedRangeCount = 0;
    outStats.allocationSizeMin = VK_WHOLE_SIZE;
    outStats.allocationSizeMax = 0;
    outStats.unusedRangeSizeMin = VK_WHOLE_SIZE;
    outStats.unusedRangeSizeMax = 0;
}

static void VmaAddDetailedStatisticsAllocation(VmaDetailedStatistics& inoutStats, VkDeviceSize size)
{
    inoutStats.statistics.allocationCount++;
    inoutStats.statistics.allocationBytes += size;
    inoutStats.allocationSizeMin = VMA_MIN(inoutStats.allocationSizeMin, size);
    inoutStats.allocationSizeMax = VMA_MAX(inoutStats.allocationSizeMax, size);
}

static void VmaAddDetailedStatisticsUnusedRange(VmaDetailedStatistics& inoutStats, VkDeviceSize size)
{
    inoutStats.unusedRangeCount++;
    inoutStats.unusedRangeSizeMin = VMA_MIN(inoutStats.unusedRangeSizeMin, size);
    inoutStats.unusedRangeSizeMax = VMA_MAX(inoutStats.unusedRangeSizeMax, size);
}

static void VmaAddDetailedStatistics(VmaDetailedStatistics& inoutStats, const VmaDetailedStatistics& src)
{
    VmaAddStatistics(inoutStats.statistics, src.statistics);
    inoutStats.unusedRangeCount += src.unusedRangeCount;
    inoutStats.allocationSizeMin = VMA_MIN(inoutStats.allocationSizeMin, src.allocationSizeMin);
    inoutStats.allocationSizeMax = VMA_MAX(inoutStats.allocationSizeMax, src.allocationSizeMax);
    inoutStats.unusedRangeSizeMin = VMA_MIN(inoutStats.unusedRangeSizeMin, src.unusedRangeSizeMin);
    inoutStats.unusedRangeSizeMax = VMA_MAX(inoutStats.unusedRangeSizeMax, src.unusedRangeSizeMax);
}

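// Illustrative sketch (not part of the library): the intended accumulation
// pattern. Clearing initializes allocationSizeMin to VK_WHOLE_SIZE so the
// first VMA_MIN takes the real value; a cleared struct therefore merges as
// an identity element under VmaAddDetailedStatistics. Guarded out of
// compilation.
#if 0
VmaDetailedStatistics total;
VmaClearDetailedStatistics(total);
VmaAddDetailedStatisticsAllocation(total, 256);  // one 256-byte allocation
VmaAddDetailedStatisticsAllocation(total, 1024); // min = 256, max = 1024
VmaAddDetailedStatisticsUnusedRange(total, 64);  // one 64-byte free range

VmaDetailedStatistics perHeap;
VmaClearDetailedStatistics(perHeap);
VmaAddDetailedStatistics(perHeap, total);        // merge per-type stats into per-heap
#endif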
#endif // _VMA_STATISTICS_FUNCTIONS

#ifndef _VMA_MUTEX_LOCK
// Helper RAII class to lock a mutex in constructor and unlock it in destructor (at the end of scope).
struct VmaMutexLock
{
    VMA_CLASS_NO_COPY(VmaMutexLock)
public:
    VmaMutexLock(VMA_MUTEX& mutex, bool useMutex = true) :
        m_pMutex(useMutex ? &mutex : VMA_NULL)
    {
        if (m_pMutex) { m_pMutex->Lock(); }
    }
    ~VmaMutexLock() { if (m_pMutex) { m_pMutex->Unlock(); } }

private:
    VMA_MUTEX* m_pMutex;
};

// Helper RAII class to lock a RW mutex in constructor and unlock it in destructor (at the end of scope), for reading.
struct VmaMutexLockRead
{
    VMA_CLASS_NO_COPY(VmaMutexLockRead)
public:
    VmaMutexLockRead(VMA_RW_MUTEX& mutex, bool useMutex) :
        m_pMutex(useMutex ? &mutex : VMA_NULL)
    {
        if (m_pMutex) { m_pMutex->LockRead(); }
    }
    ~VmaMutexLockRead() { if (m_pMutex) { m_pMutex->UnlockRead(); } }

private:
    VMA_RW_MUTEX* m_pMutex;
};

// Helper RAII class to lock a RW mutex in constructor and unlock it in destructor (at the end of scope), for writing.
struct VmaMutexLockWrite
{
    VMA_CLASS_NO_COPY(VmaMutexLockWrite)
public:
    VmaMutexLockWrite(VMA_RW_MUTEX& mutex, bool useMutex)
        : m_pMutex(useMutex ? &mutex : VMA_NULL)
    {
        if (m_pMutex) { m_pMutex->LockWrite(); }
    }
    ~VmaMutexLockWrite() { if (m_pMutex) { m_pMutex->UnlockWrite(); } }

private:
    VMA_RW_MUTEX* m_pMutex;
};

#if VMA_DEBUG_GLOBAL_MUTEX
    static VMA_MUTEX gDebugGlobalMutex;
    #define VMA_DEBUG_GLOBAL_MUTEX_LOCK VmaMutexLock debugGlobalMutexLock(gDebugGlobalMutex, true);
#else
    #define VMA_DEBUG_GLOBAL_MUTEX_LOCK
#endif
#endif // _VMA_MUTEX_LOCK
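
// Illustrative sketch (not part of the library): the RAII pattern these
// helpers implement; the bool parameter lets single-threaded configurations
// skip locking entirely (m_pMutex stays null, so Lock/Unlock become no-ops).
// Guarded out of compilation; the function and parameter names are
// hypothetical.
#if 0
void Example(VMA_RW_MUTEX& rwMutex, bool useMutex)
{
    {
        VmaMutexLockRead readLock(rwMutex, useMutex); // LockRead() here, or no-op
        // ... read shared state ...
    } // UnlockRead() at end of scope
    VmaMutexLockWrite writeLock(rwMutex, useMutex);
    // ... mutate shared state until the end of the function ...
}
#endif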

#ifndef _VMA_ATOMIC_TRANSACTIONAL_INCREMENT
// An object that increments given atomic but decrements it back in the destructor unless Commit() is called.
template<typename T>
struct AtomicTransactionalIncrement
{
public:
    typedef std::atomic<T> AtomicT;

    ~AtomicTransactionalIncrement()
    {
        if(m_Atomic)
            --(*m_Atomic);
    }

    void Commit() { m_Atomic = nullptr; }
    T Increment(AtomicT* atomic)
    {
        m_Atomic = atomic;
        return m_Atomic->fetch_add(1);
    }

private:
    AtomicT* m_Atomic = nullptr;
};
#endif // _VMA_ATOMIC_TRANSACTIONAL_INCREMENT
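
// Illustrative sketch (not part of the library): the transactional use of
// the guard above - the counter increment is rolled back automatically on
// any early error return and kept only after Commit(). Guarded out of
// compilation; "gCount" and "TryCreate" are hypothetical.
#if 0
static std::atomic<uint32_t> gCount{0};

VkResult TryCreate()
{
    AtomicTransactionalIncrement<uint32_t> guard;
    guard.Increment(&gCount);                   // tentatively count the new object
    if (/* creation fails */ false)
        return VK_ERROR_OUT_OF_HOST_MEMORY;     // destructor decrements gCount back
    guard.Commit();                             // success: keep the increment
    return VK_SUCCESS;
}
#endif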

#ifndef _VMA_STL_ALLOCATOR
// STL-compatible allocator.
template<typename T>
struct VmaStlAllocator
{
    const VkAllocationCallbacks* const m_pCallbacks;
    typedef T value_type;

    VmaStlAllocator(const VkAllocationCallbacks* pCallbacks) : m_pCallbacks(pCallbacks) {}
    template<typename U>
    VmaStlAllocator(const VmaStlAllocator<U>& src) : m_pCallbacks(src.m_pCallbacks) {}
    VmaStlAllocator(const VmaStlAllocator&) = default;
    VmaStlAllocator& operator=(const VmaStlAllocator&) = delete;

    T* allocate(size_t n) { return VmaAllocateArray<T>(m_pCallbacks, n); }
    void deallocate(T* p, size_t n) { VmaFree(m_pCallbacks, p); }

    template<typename U>
    bool operator==(const VmaStlAllocator<U>& rhs) const
    {
        return m_pCallbacks == rhs.m_pCallbacks;
    }
    template<typename U>
    bool operator!=(const VmaStlAllocator<U>& rhs) const
    {
        return m_pCallbacks != rhs.m_pCallbacks;
    }
};
#endif // _VMA_STL_ALLOCATOR

#ifndef _VMA_VECTOR
/* Class with interface compatible with subset of std::vector.
T must be POD because constructors and destructors are not called and memcpy is
used for these objects. */
template<typename T, typename AllocatorT>
class VmaVector
{
public:
    typedef T value_type;
    typedef T* iterator;
    typedef const T* const_iterator;

    VmaVector(const AllocatorT& allocator);
    VmaVector(size_t count, const AllocatorT& allocator);
    // This version of the constructor is here for compatibility with pre-C++14 std::vector.
    // value is unused.
    VmaVector(size_t count, const T& value, const AllocatorT& allocator) : VmaVector(count, allocator) {}
    VmaVector(const VmaVector<T, AllocatorT>& src);
    VmaVector& operator=(const VmaVector& rhs);
    ~VmaVector() { VmaFree(m_Allocator.m_pCallbacks, m_pArray); }

    bool empty() const { return m_Count == 0; }
    size_t size() const { return m_Count; }
    T* data() { return m_pArray; }
    T& front() { VMA_HEAVY_ASSERT(m_Count > 0); return m_pArray[0]; }
    T& back() { VMA_HEAVY_ASSERT(m_Count > 0); return m_pArray[m_Count - 1]; }
    const T* data() const { return m_pArray; }
    const T& front() const { VMA_HEAVY_ASSERT(m_Count > 0); return m_pArray[0]; }
    const T& back() const { VMA_HEAVY_ASSERT(m_Count > 0); return m_pArray[m_Count - 1]; }

    iterator begin() { return m_pArray; }
    iterator end() { return m_pArray + m_Count; }
    const_iterator cbegin() const { return m_pArray; }
    const_iterator cend() const { return m_pArray + m_Count; }
    const_iterator begin() const { return cbegin(); }
    const_iterator end() const { return cend(); }

    void pop_front() { VMA_HEAVY_ASSERT(m_Count > 0); remove(0); }
    void pop_back() { VMA_HEAVY_ASSERT(m_Count > 0); resize(size() - 1); }
    void push_front(const T& src) { insert(0, src); }

    void push_back(const T& src);
    void reserve(size_t newCapacity, bool freeMemory = false);
    void resize(size_t newCount);
    void clear() { resize(0); }
    void shrink_to_fit();
    void insert(size_t index, const T& src);
    void remove(size_t index);

    T& operator[](size_t index) { VMA_HEAVY_ASSERT(index < m_Count); return m_pArray[index]; }
    const T& operator[](size_t index) const { VMA_HEAVY_ASSERT(index < m_Count); return m_pArray[index]; }

private:
    AllocatorT m_Allocator;
    T* m_pArray;
    size_t m_Count;
    size_t m_Capacity;
};

#ifndef _VMA_VECTOR_FUNCTIONS
template<typename T, typename AllocatorT>
VmaVector<T, AllocatorT>::VmaVector(const AllocatorT& allocator)
    : m_Allocator(allocator),
    m_pArray(VMA_NULL),
    m_Count(0),
    m_Capacity(0) {}

template<typename T, typename AllocatorT>
VmaVector<T, AllocatorT>::VmaVector(size_t count, const AllocatorT& allocator)
    : m_Allocator(allocator),
    m_pArray(count ? (T*)VmaAllocateArray<T>(allocator.m_pCallbacks, count) : VMA_NULL),
    m_Count(count),
    m_Capacity(count) {}

template<typename T, typename AllocatorT>
VmaVector<T, AllocatorT>::VmaVector(const VmaVector& src)
    : m_Allocator(src.m_Allocator),
    m_pArray(src.m_Count ? (T*)VmaAllocateArray<T>(src.m_Allocator.m_pCallbacks, src.m_Count) : VMA_NULL),
    m_Count(src.m_Count),
    m_Capacity(src.m_Count)
{
    if (m_Count != 0)
    {
        memcpy(m_pArray, src.m_pArray, m_Count * sizeof(T));
    }
}

template<typename T, typename AllocatorT>
VmaVector<T, AllocatorT>& VmaVector<T, AllocatorT>::operator=(const VmaVector& rhs)
{
    if (&rhs != this)
    {
        resize(rhs.m_Count);
        if (m_Count != 0)
        {
            memcpy(m_pArray, rhs.m_pArray, m_Count * sizeof(T));
        }
    }
    return *this;
}

template<typename T, typename AllocatorT>
void VmaVector<T, AllocatorT>::push_back(const T& src)
{
    const size_t newIndex = size();
    resize(newIndex + 1);
    m_pArray[newIndex] = src;
}

template<typename T, typename AllocatorT>
void VmaVector<T, AllocatorT>::reserve(size_t newCapacity, bool freeMemory)
{
    newCapacity = VMA_MAX(newCapacity, m_Count);

    if ((newCapacity < m_Capacity) && !freeMemory)
    {
        newCapacity = m_Capacity;
    }

    if (newCapacity != m_Capacity)
    {
        T* const newArray = newCapacity ? VmaAllocateArray<T>(m_Allocator.m_pCallbacks, newCapacity) : VMA_NULL;
        if (m_Count != 0)
        {
            memcpy(newArray, m_pArray, m_Count * sizeof(T));
        }
        VmaFree(m_Allocator.m_pCallbacks, m_pArray);
        m_Capacity = newCapacity;
        m_pArray = newArray;
    }
}

template<typename T, typename AllocatorT>
void VmaVector<T, AllocatorT>::resize(size_t newCount)
{
    size_t newCapacity = m_Capacity;
    if (newCount > m_Capacity)
    {
        newCapacity = VMA_MAX(newCount, VMA_MAX(m_Capacity * 3 / 2, (size_t)8));
    }

    if (newCapacity != m_Capacity)
    {
        T* const newArray = newCapacity ? VmaAllocateArray<T>(m_Allocator.m_pCallbacks, newCapacity) : VMA_NULL;
        const size_t elementsToCopy = VMA_MIN(m_Count, newCount);
        if (elementsToCopy != 0)
        {
            memcpy(newArray, m_pArray, elementsToCopy * sizeof(T));
        }
        VmaFree(m_Allocator.m_pCallbacks, m_pArray);
        m_Capacity = newCapacity;
        m_pArray = newArray;
    }

    m_Count = newCount;
}

template<typename T, typename AllocatorT>
void VmaVector<T, AllocatorT>::shrink_to_fit()
{
    if (m_Capacity > m_Count)
    {
        T* newArray = VMA_NULL;
        if (m_Count > 0)
        {
            newArray = VmaAllocateArray<T>(m_Allocator.m_pCallbacks, m_Count);
            memcpy(newArray, m_pArray, m_Count * sizeof(T));
        }
        VmaFree(m_Allocator.m_pCallbacks, m_pArray);
        m_Capacity = m_Count;
        m_pArray = newArray;
    }
}

template<typename T, typename AllocatorT>
void VmaVector<T, AllocatorT>::insert(size_t index, const T& src)
{
    VMA_HEAVY_ASSERT(index <= m_Count);
    const size_t oldCount = size();
    resize(oldCount + 1);
    if (index < oldCount)
    {
        memmove(m_pArray + (index + 1), m_pArray + index, (oldCount - index) * sizeof(T));
    }
    m_pArray[index] = src;
}

template<typename T, typename AllocatorT>
void VmaVector<T, AllocatorT>::remove(size_t index)
{
    VMA_HEAVY_ASSERT(index < m_Count);
    const size_t oldCount = size();
    if (index < oldCount - 1)
    {
        memmove(m_pArray + index, m_pArray + (index + 1), (oldCount - index - 1) * sizeof(T));
    }
    resize(oldCount - 1);
}
#endif // _VMA_VECTOR_FUNCTIONS

template<typename T, typename allocatorT>
static void VmaVectorInsert(VmaVector<T, allocatorT>& vec, size_t index, const T& item)
{
    vec.insert(index, item);
}

template<typename T, typename allocatorT>
static void VmaVectorRemove(VmaVector<T, allocatorT>& vec, size_t index)
{
    vec.remove(index);
}
#endif // _VMA_VECTOR
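
// Illustrative sketch (not part of the library): VmaVector is used with
// VmaStlAllocator so element storage also goes through VkAllocationCallbacks;
// the POD-only restriction is what makes the memcpy-based growth above valid.
// Guarded out of compilation; "allocs" is a hypothetical callbacks pointer.
#if 0
const VkAllocationCallbacks* allocs = VMA_NULL;
VmaVector<uint32_t, VmaStlAllocator<uint32_t>> v(VmaStlAllocator<uint32_t>(allocs));
v.push_back(1);
v.push_back(2);
v.insert(1, 42);    // v == {1, 42, 2}; elements shifted with memmove
v.remove(0);        // v == {42, 2}
#endif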

#ifndef _VMA_SMALL_VECTOR
/*
This is a vector (a variable-sized array), optimized for the case when the array is small.

It contains some number of elements in-place, which allows it to avoid heap allocation
when the actual number of elements is below that threshold. This allows normal "small"
cases to be fast without losing generality for large inputs.
*/
template<typename T, typename AllocatorT, size_t N>
class VmaSmallVector
{
public:
    typedef T value_type;
    typedef T* iterator;

    VmaSmallVector(const AllocatorT& allocator);
    VmaSmallVector(size_t count, const AllocatorT& allocator);
    template<typename SrcT, typename SrcAllocatorT, size_t SrcN>
    VmaSmallVector(const VmaSmallVector<SrcT, SrcAllocatorT, SrcN>&) = delete;
    template<typename SrcT, typename SrcAllocatorT, size_t SrcN>
    VmaSmallVector<T, AllocatorT, N>& operator=(const VmaSmallVector<SrcT, SrcAllocatorT, SrcN>&) = delete;
    ~VmaSmallVector() = default;

    bool empty() const { return m_Count == 0; }
    size_t size() const { return m_Count; }
    T* data() { return m_Count > N ? m_DynamicArray.data() : m_StaticArray; }
    T& front() { VMA_HEAVY_ASSERT(m_Count > 0); return data()[0]; }
    T& back() { VMA_HEAVY_ASSERT(m_Count > 0); return data()[m_Count - 1]; }
    const T* data() const { return m_Count > N ? m_DynamicArray.data() : m_StaticArray; }
    const T& front() const { VMA_HEAVY_ASSERT(m_Count > 0); return data()[0]; }
    const T& back() const { VMA_HEAVY_ASSERT(m_Count > 0); return data()[m_Count - 1]; }

    iterator begin() { return data(); }
    iterator end() { return data() + m_Count; }

    void pop_front() { VMA_HEAVY_ASSERT(m_Count > 0); remove(0); }
    void pop_back() { VMA_HEAVY_ASSERT(m_Count > 0); resize(size() - 1); }
    void push_front(const T& src) { insert(0, src); }

    void push_back(const T& src);
    void resize(size_t newCount, bool freeMemory = false);
    void clear(bool freeMemory = false);
    void insert(size_t index, const T& src);
    void remove(size_t index);

    T& operator[](size_t index) { VMA_HEAVY_ASSERT(index < m_Count); return data()[index]; }
    const T& operator[](size_t index) const { VMA_HEAVY_ASSERT(index < m_Count); return data()[index]; }

private:
    size_t m_Count;
    T m_StaticArray[N]; // Used when m_Count <= N
    VmaVector<T, AllocatorT> m_DynamicArray; // Used when m_Count > N
};

#ifndef _VMA_SMALL_VECTOR_FUNCTIONS
template<typename T, typename AllocatorT, size_t N>
VmaSmallVector<T, AllocatorT, N>::VmaSmallVector(const AllocatorT& allocator)
    : m_Count(0),
    m_DynamicArray(allocator) {}

template<typename T, typename AllocatorT, size_t N>
VmaSmallVector<T, AllocatorT, N>::VmaSmallVector(size_t count, const AllocatorT& allocator)
    : m_Count(count),
    m_DynamicArray(count > N ? count : 0, allocator) {}

template<typename T, typename AllocatorT, size_t N>
void VmaSmallVector<T, AllocatorT, N>::push_back(const T& src)
{
    const size_t newIndex = size();
    resize(newIndex + 1);
    data()[newIndex] = src;
}

template<typename T, typename AllocatorT, size_t N>
void VmaSmallVector<T, AllocatorT, N>::resize(size_t newCount, bool freeMemory)
{
    if (newCount > N && m_Count > N)
    {
        // Any direction, staying in m_DynamicArray
        m_DynamicArray.resize(newCount);
        if (freeMemory)
        {
            m_DynamicArray.shrink_to_fit();
        }
    }
    else if (newCount > N && m_Count <= N)
    {
        // Growing, moving from m_StaticArray to m_DynamicArray
        m_DynamicArray.resize(newCount);
        if (m_Count > 0)
        {
            memcpy(m_DynamicArray.data(), m_StaticArray, m_Count * sizeof(T));
        }
    }
    else if (newCount <= N && m_Count > N)
    {
        // Shrinking, moving from m_DynamicArray to m_StaticArray
        if (newCount > 0)
        {
            memcpy(m_StaticArray, m_DynamicArray.data(), newCount * sizeof(T));
        }
        m_DynamicArray.resize(0);
        if (freeMemory)
        {
            m_DynamicArray.shrink_to_fit();
        }
    }
    else
    {
        // Any direction, staying in m_StaticArray - nothing to do here
    }
    m_Count = newCount;
}

template<typename T, typename AllocatorT, size_t N>
void VmaSmallVector<T, AllocatorT, N>::clear(bool freeMemory)
{
    m_DynamicArray.clear();
    if (freeMemory)
    {
        m_DynamicArray.shrink_to_fit();
    }
    m_Count = 0;
}

template<typename T, typename AllocatorT, size_t N>
void VmaSmallVector<T, AllocatorT, N>::insert(size_t index, const T& src)
{
    VMA_HEAVY_ASSERT(index <= m_Count);
    const size_t oldCount = size();
    resize(oldCount + 1);
    T* const dataPtr = data();
    if (index < oldCount)
    {
        // This could be more optimal in the case where the memmove could be a memcpy directly from m_StaticArray to m_DynamicArray.
        memmove(dataPtr + (index + 1), dataPtr + index, (oldCount - index) * sizeof(T));
    }
    dataPtr[index] = src;
}

template<typename T, typename AllocatorT, size_t N>
void VmaSmallVector<T, AllocatorT, N>::remove(size_t index)
{
    VMA_HEAVY_ASSERT(index < m_Count);
    const size_t oldCount = size();
    if (index < oldCount - 1)
    {
        // This could be more optimal in the case where the memmove could be a memcpy directly from m_DynamicArray to m_StaticArray.
        T* const dataPtr = data();
        memmove(dataPtr + index, dataPtr + (index + 1), (oldCount - index - 1) * sizeof(T));
    }
    resize(oldCount - 1);
}
#endif // _VMA_SMALL_VECTOR_FUNCTIONS
#endif // _VMA_SMALL_VECTOR
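
// Illustrative sketch (not part of the library): with N in-place slots, no
// heap allocation happens until the element count exceeds N; resize() then
// memcpy-moves the contents into the dynamic array, and back again when
// shrinking. Guarded out of compilation; "allocs" is a hypothetical
// callbacks pointer.
#if 0
const VkAllocationCallbacks* allocs = VMA_NULL;
VmaSmallVector<uint32_t, VmaStlAllocator<uint32_t>, 4> sv(VmaStlAllocator<uint32_t>(allocs));
for (uint32_t i = 0; i < 4; ++i)
    sv.push_back(i);    // stays in m_StaticArray, no heap traffic
sv.push_back(4);        // count > 4: contents spill into m_DynamicArray
sv.resize(2, true);     // moves back in-place and frees the heap block
#endif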

#ifndef _VMA_POOL_ALLOCATOR
/*
Allocator for objects of type T using a list of arrays (pools) to speed up
allocation. Number of elements that can be allocated is not bounded because
allocator can create multiple blocks.
*/
template<typename T>
class VmaPoolAllocator
{
    VMA_CLASS_NO_COPY(VmaPoolAllocator)
public:
    VmaPoolAllocator(const VkAllocationCallbacks* pAllocationCallbacks, uint32_t firstBlockCapacity);
    ~VmaPoolAllocator();
    template<typename... Types> T* Alloc(Types&&... args);
    void Free(T* ptr);

private:
    union Item
    {
        uint32_t NextFreeIndex;
        alignas(T) char Value[sizeof(T)];
    };
    struct ItemBlock
    {
        Item* pItems;
        uint32_t Capacity;
        uint32_t FirstFreeIndex;
    };

    const VkAllocationCallbacks* m_pAllocationCallbacks;
    const uint32_t m_FirstBlockCapacity;
    VmaVector<ItemBlock, VmaStlAllocator<ItemBlock>> m_ItemBlocks;

    ItemBlock& CreateNewBlock();
};

#ifndef _VMA_POOL_ALLOCATOR_FUNCTIONS
template<typename T>
VmaPoolAllocator<T>::VmaPoolAllocator(const VkAllocationCallbacks* pAllocationCallbacks, uint32_t firstBlockCapacity)
    : m_pAllocationCallbacks(pAllocationCallbacks),
    m_FirstBlockCapacity(firstBlockCapacity),
    m_ItemBlocks(VmaStlAllocator<ItemBlock>(pAllocationCallbacks))
{
    VMA_ASSERT(m_FirstBlockCapacity > 1);
}

template<typename T>
VmaPoolAllocator<T>::~VmaPoolAllocator()
{
    for (size_t i = m_ItemBlocks.size(); i--;)
        vma_delete_array(m_pAllocationCallbacks, m_ItemBlocks[i].pItems, m_ItemBlocks[i].Capacity);
    m_ItemBlocks.clear();
}

template<typename T>
template<typename... Types> T* VmaPoolAllocator<T>::Alloc(Types&&... args)
{
    for (size_t i = m_ItemBlocks.size(); i--; )
    {
        ItemBlock& block = m_ItemBlocks[i];
        // This block has some free items: Use first one.
        if (block.FirstFreeIndex != UINT32_MAX)
        {
            Item* const pItem = &block.pItems[block.FirstFreeIndex];
            block.FirstFreeIndex = pItem->NextFreeIndex;
            T* result = (T*)&pItem->Value;
            new(result)T(std::forward<Types>(args)...); // Explicit constructor call.
            return result;
        }
    }

    // No block has free item: Create new one and use it.
    ItemBlock& newBlock = CreateNewBlock();
    Item* const pItem = &newBlock.pItems[0];
    newBlock.FirstFreeIndex = pItem->NextFreeIndex;
    T* result = (T*)&pItem->Value;
    new(result) T(std::forward<Types>(args)...); // Explicit constructor call.
    return result;
}

template<typename T>
void VmaPoolAllocator<T>::Free(T* ptr)
{
    // Search all memory blocks to find ptr.
    for (size_t i = m_ItemBlocks.size(); i--; )
    {
        ItemBlock& block = m_ItemBlocks[i];

        // Casting to union.
        Item* pItemPtr;
        memcpy(&pItemPtr, &ptr, sizeof(pItemPtr));

        // Check if pItemPtr is in address range of this block.
        if ((pItemPtr >= block.pItems) && (pItemPtr < block.pItems + block.Capacity))
        {
            ptr->~T(); // Explicit destructor call.
            const uint32_t index = static_cast<uint32_t>(pItemPtr - block.pItems);
            pItemPtr->NextFreeIndex = block.FirstFreeIndex;
            block.FirstFreeIndex = index;
            return;
        }
    }
    VMA_ASSERT(0 && "Pointer doesn't belong to this memory pool.");
}

template<typename T>
typename VmaPoolAllocator<T>::ItemBlock& VmaPoolAllocator<T>::CreateNewBlock()
{
    const uint32_t newBlockCapacity = m_ItemBlocks.empty() ?
        m_FirstBlockCapacity : m_ItemBlocks.back().Capacity * 3 / 2;

    const ItemBlock newBlock =
    {
        vma_new_array(m_pAllocationCallbacks, Item, newBlockCapacity),
        newBlockCapacity,
        0
    };

    m_ItemBlocks.push_back(newBlock);

    // Setup singly-linked list of all free items in this block.
    for (uint32_t i = 0; i < newBlockCapacity - 1; ++i)
        newBlock.pItems[i].NextFreeIndex = i + 1;
    newBlock.pItems[newBlockCapacity - 1].NextFreeIndex = UINT32_MAX;
    return m_ItemBlocks.back();
}
#endif // _VMA_POOL_ALLOCATOR_FUNCTIONS
#endif // _VMA_POOL_ALLOCATOR
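
// Illustrative sketch (not part of the library): allocation walks the free
// list embedded in each block (blocks grow by a factor of 3/2), so
// Alloc()/Free() avoid a heap call per object. Guarded out of compilation;
// "Node" and "allocs" are hypothetical.
#if 0
struct Node { int value; Node(int v) : value(v) {} };

const VkAllocationCallbacks* allocs = VMA_NULL;
VmaPoolAllocator<Node> pool(allocs, 32); // first block holds 32 items
Node* a = pool.Alloc(1);                 // placement-new inside the block
Node* b = pool.Alloc(2);
pool.Free(a);                            // returns the slot to the block's free list
pool.Free(b);
#endif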

#ifndef _VMA_RAW_LIST
template<typename T>
struct VmaListItem
{
    VmaListItem* pPrev;
    VmaListItem* pNext;
    T Value;
};

// Doubly linked list.
template<typename T>
class VmaRawList
{
    VMA_CLASS_NO_COPY(VmaRawList)
public:
    typedef VmaListItem<T> ItemType;

    VmaRawList(const VkAllocationCallbacks* pAllocationCallbacks);
    // Intentionally not calling Clear, because that would be unnecessary
    // computations to return all items to m_ItemAllocator as free.
    ~VmaRawList() = default;

    size_t GetCount() const { return m_Count; }
    bool IsEmpty() const { return m_Count == 0; }

    ItemType* Front() { return m_pFront; }
    ItemType* Back() { return m_pBack; }
    const ItemType* Front() const { return m_pFront; }
    const ItemType* Back() const { return m_pBack; }

    ItemType* PushFront();
    ItemType* PushBack();
    ItemType* PushFront(const T& value);
    ItemType* PushBack(const T& value);
    void PopFront();
    void PopBack();

    // Item can be null - it means PushBack.
    ItemType* InsertBefore(ItemType* pItem);
    // Item can be null - it means PushFront.
    ItemType* InsertAfter(ItemType* pItem);
    ItemType* InsertBefore(ItemType* pItem, const T& value);
    ItemType* InsertAfter(ItemType* pItem, const T& value);

    void Clear();
    void Remove(ItemType* pItem);

private:
    const VkAllocationCallbacks* const m_pAllocationCallbacks;
    VmaPoolAllocator<ItemType> m_ItemAllocator;
    ItemType* m_pFront;
    ItemType* m_pBack;
    size_t m_Count;
};

#ifndef _VMA_RAW_LIST_FUNCTIONS
template<typename T>
VmaRawList<T>::VmaRawList(const VkAllocationCallbacks* pAllocationCallbacks)
    : m_pAllocationCallbacks(pAllocationCallbacks),
    m_ItemAllocator(pAllocationCallbacks, 128),
    m_pFront(VMA_NULL),
    m_pBack(VMA_NULL),
    m_Count(0) {}

template<typename T>
VmaListItem<T>* VmaRawList<T>::PushFront()
{
    ItemType* const pNewItem = m_ItemAllocator.Alloc();
    pNewItem->pPrev = VMA_NULL;
    if (IsEmpty())
    {
        pNewItem->pNext = VMA_NULL;
        m_pFront = pNewItem;
        m_pBack = pNewItem;
        m_Count = 1;
    }
    else
    {
        pNewItem->pNext = m_pFront;
        m_pFront->pPrev = pNewItem;
        m_pFront = pNewItem;
        ++m_Count;
    }
    return pNewItem;
}

template<typename T>
VmaListItem<T>* VmaRawList<T>::PushBack()
{
    ItemType* const pNewItem = m_ItemAllocator.Alloc();
    pNewItem->pNext = VMA_NULL;
    if(IsEmpty())
    {
        pNewItem->pPrev = VMA_NULL;
        m_pFront = pNewItem;
        m_pBack = pNewItem;
        m_Count = 1;
    }
    else
    {
        pNewItem->pPrev = m_pBack;
        m_pBack->pNext = pNewItem;
        m_pBack = pNewItem;
        ++m_Count;
    }
    return pNewItem;
}

template<typename T>
VmaListItem<T>* VmaRawList<T>::PushFront(const T& value)
{
    ItemType* const pNewItem = PushFront();
    pNewItem->Value = value;
    return pNewItem;
}

template<typename T>
VmaListItem<T>* VmaRawList<T>::PushBack(const T& value)
{
    ItemType* const pNewItem = PushBack();
    pNewItem->Value = value;
    return pNewItem;
}

template<typename T>
void VmaRawList<T>::PopFront()
{
    VMA_HEAVY_ASSERT(m_Count > 0);
    ItemType* const pFrontItem = m_pFront;
    ItemType* const pNextItem = pFrontItem->pNext;
    if (pNextItem != VMA_NULL)
    {
        pNextItem->pPrev = VMA_NULL;
    }
    m_pFront = pNextItem;
    m_ItemAllocator.Free(pFrontItem);
    --m_Count;
}

template<typename T>
void VmaRawList<T>::PopBack()
{
    VMA_HEAVY_ASSERT(m_Count > 0);
    ItemType* const pBackItem = m_pBack;
    ItemType* const pPrevItem = pBackItem->pPrev;
    if(pPrevItem != VMA_NULL)
    {
        pPrevItem->pNext = VMA_NULL;
    }
    m_pBack = pPrevItem;
    m_ItemAllocator.Free(pBackItem);
    --m_Count;
}

template<typename T>
void VmaRawList<T>::Clear()
{
    if (IsEmpty() == false)
    {
        ItemType* pItem = m_pBack;
        while (pItem != VMA_NULL)
        {
            ItemType* const pPrevItem = pItem->pPrev;
            m_ItemAllocator.Free(pItem);
            pItem = pPrevItem;
        }
        m_pFront = VMA_NULL;
        m_pBack = VMA_NULL;
        m_Count = 0;
    }
}

template<typename T>
void VmaRawList<T>::Remove(ItemType* pItem)
{
    VMA_HEAVY_ASSERT(pItem != VMA_NULL);
    VMA_HEAVY_ASSERT(m_Count > 0);

    if(pItem->pPrev != VMA_NULL)
    {
        pItem->pPrev->pNext = pItem->pNext;
    }
    else
    {
        VMA_HEAVY_ASSERT(m_pFront == pItem);
        m_pFront = pItem->pNext;
    }

    if(pItem->pNext != VMA_NULL)
    {
        pItem->pNext->pPrev = pItem->pPrev;
    }
    else
    {
        VMA_HEAVY_ASSERT(m_pBack == pItem);
        m_pBack = pItem->pPrev;
    }

    m_ItemAllocator.Free(pItem);
    --m_Count;
}

template<typename T>
VmaListItem<T>* VmaRawList<T>::InsertBefore(ItemType* pItem)
{
    if(pItem != VMA_NULL)
    {
        ItemType* const prevItem = pItem->pPrev;
        ItemType* const newItem = m_ItemAllocator.Alloc();
        newItem->pPrev = prevItem;
        newItem->pNext = pItem;
        pItem->pPrev = newItem;
        if(prevItem != VMA_NULL)
        {
            prevItem->pNext = newItem;
        }
        else
        {
            VMA_HEAVY_ASSERT(m_pFront == pItem);
            m_pFront = newItem;
        }
        ++m_Count;
        return newItem;
    }
    else
        return PushBack();
}

template<typename T>
VmaListItem<T>* VmaRawList<T>::InsertAfter(ItemType* pItem)
{
    if(pItem != VMA_NULL)
    {
        ItemType* const nextItem = pItem->pNext;
        ItemType* const newItem = m_ItemAllocator.Alloc();
        newItem->pNext = nextItem;
        newItem->pPrev = pItem;
        pItem->pNext = newItem;
        if(nextItem != VMA_NULL)
        {
            nextItem->pPrev = newItem;
        }
        else
        {
            VMA_HEAVY_ASSERT(m_pBack == pItem);
            m_pBack = newItem;
        }
        ++m_Count;
        return newItem;
    }
    else
        return PushFront();
}

template<typename T>
VmaListItem<T>* VmaRawList<T>::InsertBefore(ItemType* pItem, const T& value)
{
    ItemType* const newItem = InsertBefore(pItem);
    newItem->Value = value;
    return newItem;
}

template<typename T>
VmaListItem<T>* VmaRawList<T>::InsertAfter(ItemType* pItem, const T& value)
{
    ItemType* const newItem = InsertAfter(pItem);
    newItem->Value = value;
    return newItem;
}
#endif // _VMA_RAW_LIST_FUNCTIONS
#endif // _VMA_RAW_LIST
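
// Illustrative sketch (not part of the library): VmaRawList hands out raw
// VmaListItem<T>* nodes (backed by VmaPoolAllocator) rather than iterators;
// VmaList below wraps it in an STL-style interface. Guarded out of
// compilation; "allocs" is a hypothetical callbacks pointer.
#if 0
const VkAllocationCallbacks* allocs = VMA_NULL;
VmaRawList<int> rawList(allocs);
VmaListItem<int>* first = rawList.PushBack(1);
rawList.PushBack(3);
rawList.InsertAfter(first, 2);  // list: 1, 2, 3
rawList.Remove(first);          // list: 2, 3
rawList.Clear();                // optional: the destructor intentionally skips this
#endif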

#ifndef _VMA_LIST
template<typename T, typename AllocatorT>
class VmaList
{
    VMA_CLASS_NO_COPY(VmaList)
public:
    class reverse_iterator;
    class const_iterator;
    class const_reverse_iterator;

    class iterator
    {
        friend class const_iterator;
        friend class VmaList<T, AllocatorT>;
    public:
        iterator() : m_pList(VMA_NULL), m_pItem(VMA_NULL) {}
        iterator(const reverse_iterator& src) : m_pList(src.m_pList), m_pItem(src.m_pItem) {}

        T& operator*() const { VMA_HEAVY_ASSERT(m_pItem != VMA_NULL); return m_pItem->Value; }
        T* operator->() const { VMA_HEAVY_ASSERT(m_pItem != VMA_NULL); return &m_pItem->Value; }

        bool operator==(const iterator& rhs) const { VMA_HEAVY_ASSERT(m_pList == rhs.m_pList); return m_pItem == rhs.m_pItem; }
        bool operator!=(const iterator& rhs) const { VMA_HEAVY_ASSERT(m_pList == rhs.m_pList); return m_pItem != rhs.m_pItem; }

        iterator operator++(int) { iterator result = *this; ++*this; return result; }
        iterator operator--(int) { iterator result = *this; --*this; return result; }

        iterator& operator++() { VMA_HEAVY_ASSERT(m_pItem != VMA_NULL); m_pItem = m_pItem->pNext; return *this; }
        iterator& operator--();

    private:
        VmaRawList<T>* m_pList;
        VmaListItem<T>* m_pItem;

        iterator(VmaRawList<T>* pList, VmaListItem<T>* pItem) : m_pList(pList), m_pItem(pItem) {}
    };
    class reverse_iterator
    {
        friend class const_reverse_iterator;
        friend class VmaList<T, AllocatorT>;
    public:
        reverse_iterator() : m_pList(VMA_NULL), m_pItem(VMA_NULL) {}
        reverse_iterator(const iterator& src) : m_pList(src.m_pList), m_pItem(src.m_pItem) {}

        T& operator*() const { VMA_HEAVY_ASSERT(m_pItem != VMA_NULL); return m_pItem->Value; }
        T* operator->() const { VMA_HEAVY_ASSERT(m_pItem != VMA_NULL); return &m_pItem->Value; }

        bool operator==(const reverse_iterator& rhs) const { VMA_HEAVY_ASSERT(m_pList == rhs.m_pList); return m_pItem == rhs.m_pItem; }
        bool operator!=(const reverse_iterator& rhs) const { VMA_HEAVY_ASSERT(m_pList == rhs.m_pList); return m_pItem != rhs.m_pItem; }

        reverse_iterator operator++(int) { reverse_iterator result = *this; ++*this; return result; }
        reverse_iterator operator--(int) { reverse_iterator result = *this; --*this; return result; }

        reverse_iterator& operator++() { VMA_HEAVY_ASSERT(m_pItem != VMA_NULL); m_pItem = m_pItem->pPrev; return *this; }
        reverse_iterator& operator--();

    private:
        VmaRawList<T>* m_pList;
        VmaListItem<T>* m_pItem;

        reverse_iterator(VmaRawList<T>* pList, VmaListItem<T>* pItem) : m_pList(pList), m_pItem(pItem) {}
    };
    class const_iterator
    {
        friend class VmaList<T, AllocatorT>;
    public:
        const_iterator() : m_pList(VMA_NULL), m_pItem(VMA_NULL) {}
        const_iterator(const iterator& src) : m_pList(src.m_pList), m_pItem(src.m_pItem) {}
        const_iterator(const reverse_iterator& src) : m_pList(src.m_pList), m_pItem(src.m_pItem) {}

        iterator drop_const() { return { const_cast<VmaRawList<T>*>(m_pList), const_cast<VmaListItem<T>*>(m_pItem) }; }

        const T& operator*() const { VMA_HEAVY_ASSERT(m_pItem != VMA_NULL); return m_pItem->Value; }
        const T* operator->() const { VMA_HEAVY_ASSERT(m_pItem != VMA_NULL); return &m_pItem->Value; }

        bool operator==(const const_iterator& rhs) const { VMA_HEAVY_ASSERT(m_pList == rhs.m_pList); return m_pItem == rhs.m_pItem; }
        bool operator!=(const const_iterator& rhs) const { VMA_HEAVY_ASSERT(m_pList == rhs.m_pList); return m_pItem != rhs.m_pItem; }

        const_iterator operator++(int) { const_iterator result = *this; ++*this; return result; }
        const_iterator operator--(int) { const_iterator result = *this; --*this; return result; }

        const_iterator& operator++() { VMA_HEAVY_ASSERT(m_pItem != VMA_NULL); m_pItem = m_pItem->pNext; return *this; }
        const_iterator& operator--();

    private:
        const VmaRawList<T>* m_pList;
        const VmaListItem<T>* m_pItem;

        const_iterator(const VmaRawList<T>* pList, const VmaListItem<T>* pItem) : m_pList(pList), m_pItem(pItem) {}
    };
    class const_reverse_iterator
    {
        friend class VmaList<T, AllocatorT>;
    public:
        const_reverse_iterator() : m_pList(VMA_NULL), m_pItem(VMA_NULL) {}
        const_reverse_iterator(const reverse_iterator& src) : m_pList(src.m_pList), m_pItem(src.m_pItem) {}
        const_reverse_iterator(const iterator& src) : m_pList(src.m_pList), m_pItem(src.m_pItem) {}

        reverse_iterator drop_const() { return { const_cast<VmaRawList<T>*>(m_pList), const_cast<VmaListItem<T>*>(m_pItem) }; }

        const T& operator*() const { VMA_HEAVY_ASSERT(m_pItem != VMA_NULL); return m_pItem->Value; }
        const T* operator->() const { VMA_HEAVY_ASSERT(m_pItem != VMA_NULL); return &m_pItem->Value; }

        bool operator==(const const_reverse_iterator& rhs) const { VMA_HEAVY_ASSERT(m_pList == rhs.m_pList); return m_pItem == rhs.m_pItem; }
        bool operator!=(const const_reverse_iterator& rhs) const { VMA_HEAVY_ASSERT(m_pList == rhs.m_pList); return m_pItem != rhs.m_pItem; }

        const_reverse_iterator operator++(int) { const_reverse_iterator result = *this; ++*this; return result; }
        const_reverse_iterator operator--(int) { const_reverse_iterator result = *this; --*this; return result; }

        const_reverse_iterator& operator++() { VMA_HEAVY_ASSERT(m_pItem != VMA_NULL); m_pItem = m_pItem->pPrev; return *this; }
        const_reverse_iterator& operator--();

    private:
        const VmaRawList<T>* m_pList;
        const VmaListItem<T>* m_pItem;

        const_reverse_iterator(const VmaRawList<T>* pList, const VmaListItem<T>* pItem) : m_pList(pList), m_pItem(pItem) {}
    };

    VmaList(const AllocatorT& allocator) : m_RawList(allocator.m_pCallbacks) {}

    bool empty() const { return m_RawList.IsEmpty(); }
    size_t size() const { return m_RawList.GetCount(); }

    iterator begin() { return iterator(&m_RawList, m_RawList.Front()); }
    iterator end() { return iterator(&m_RawList, VMA_NULL); }

    const_iterator cbegin() const { return const_iterator(&m_RawList, m_RawList.Front()); }
    const_iterator cend() const { return const_iterator(&m_RawList, VMA_NULL); }

    const_iterator begin() const { return cbegin(); }
    const_iterator end() const { return cend(); }

    reverse_iterator rbegin() { return reverse_iterator(&m_RawList, m_RawList.Back()); }
    reverse_iterator rend() { return reverse_iterator(&m_RawList, VMA_NULL); }

    const_reverse_iterator crbegin() const { return const_reverse_iterator(&m_RawList, m_RawList.Back()); }
    const_reverse_iterator crend() const { return const_reverse_iterator(&m_RawList, VMA_NULL); }

    const_reverse_iterator rbegin() const { return crbegin(); }
    const_reverse_iterator rend() const { return crend(); }

    void push_back(const T& value) { m_RawList.PushBack(value); }
    iterator insert(iterator it, const T& value) { return iterator(&m_RawList, m_RawList.InsertBefore(it.m_pItem, value)); }

    void clear() { m_RawList.Clear(); }
    void erase(iterator it) { m_RawList.Remove(it.m_pItem); }

private:
    VmaRawList<T> m_RawList;
};

#ifndef _VMA_LIST_FUNCTIONS
template<typename T, typename AllocatorT>
typename VmaList<T, AllocatorT>::iterator& VmaList<T, AllocatorT>::iterator::operator--()
{
    if (m_pItem != VMA_NULL)
    {
        m_pItem = m_pItem->pPrev;
    }
    else
    {
        VMA_HEAVY_ASSERT(!m_pList->IsEmpty());
        m_pItem = m_pList->Back();
    }
    return *this;
}

template<typename T, typename AllocatorT>
typename VmaList<T, AllocatorT>::reverse_iterator& VmaList<T, AllocatorT>::reverse_iterator::operator--()
{
    if (m_pItem != VMA_NULL)
    {
        m_pItem = m_pItem->pNext;
    }
    else
    {
        VMA_HEAVY_ASSERT(!m_pList->IsEmpty());
        m_pItem = m_pList->Front();
    }
    return *this;
}

template<typename T, typename AllocatorT>
typename VmaList<T, AllocatorT>::const_iterator& VmaList<T, AllocatorT>::const_iterator::operator--()
{
    if (m_pItem != VMA_NULL)
    {
        m_pItem = m_pItem->pPrev;
    }
    else
    {
        VMA_HEAVY_ASSERT(!m_pList->IsEmpty());
        m_pItem = m_pList->Back();
    }
    return *this;
}

template<typename T, typename AllocatorT>
typename VmaList<T, AllocatorT>::const_reverse_iterator& VmaList<T, AllocatorT>::const_reverse_iterator::operator--()
{
    if (m_pItem != VMA_NULL)
    {
        m_pItem = m_pItem->pNext;
    }
    else
    {
        VMA_HEAVY_ASSERT(!m_pList->IsEmpty());
        // Decrementing rend() must yield the first element, matching reverse_iterator::operator--().
        m_pItem = m_pList->Front();
    }
    return *this;
}
#endif // _VMA_LIST_FUNCTIONS
#endif // _VMA_LIST
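
// Illustrative sketch (not part of the library): VmaList layers STL-style
// iterators over VmaRawList; erase() passes the iterator's node straight to
// VmaRawList::Remove, so it is O(1). Guarded out of compilation; "allocs" is
// a hypothetical callbacks pointer.
#if 0
const VkAllocationCallbacks* allocs = VMA_NULL;
VmaList<int, VmaStlAllocator<int>> list(VmaStlAllocator<int>(allocs));
list.push_back(10);
list.push_back(20);
for (VmaList<int, VmaStlAllocator<int>>::iterator it = list.begin(); it != list.end(); ++it)
{
    if (*it == 10)
    {
        list.erase(it); // O(1): unlinks the underlying VmaListItem
        break;
    }
}
#endif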

#ifndef _VMA_INTRUSIVE_LINKED_LIST
/*
Expected interface of ItemTypeTraits:
struct MyItemTypeTraits
{
    typedef MyItem ItemType;
    static ItemType* GetPrev(const ItemType* item) { return item->myPrevPtr; }
    static ItemType* GetNext(const ItemType* item) { return item->myNextPtr; }
    static ItemType*& AccessPrev(ItemType* item) { return item->myPrevPtr; }
    static ItemType*& AccessNext(ItemType* item) { return item->myNextPtr; }
};
*/
template<typename ItemTypeTraits>
class VmaIntrusiveLinkedList
{
public:
    typedef typename ItemTypeTraits::ItemType ItemType;
    static ItemType* GetPrev(const ItemType* item) { return ItemTypeTraits::GetPrev(item); }
    static ItemType* GetNext(const ItemType* item) { return ItemTypeTraits::GetNext(item); }

    // Movable, not copyable.
    VmaIntrusiveLinkedList() = default;
    VmaIntrusiveLinkedList(VmaIntrusiveLinkedList&& src);
    VmaIntrusiveLinkedList(const VmaIntrusiveLinkedList&) = delete;
    VmaIntrusiveLinkedList& operator=(VmaIntrusiveLinkedList&& src);
    VmaIntrusiveLinkedList& operator=(const VmaIntrusiveLinkedList&) = delete;
    ~VmaIntrusiveLinkedList() { VMA_HEAVY_ASSERT(IsEmpty()); }

    size_t GetCount() const { return m_Count; }
    bool IsEmpty() const { return m_Count == 0; }
    ItemType* Front() { return m_Front; }
    ItemType* Back() { return m_Back; }
    const ItemType* Front() const { return m_Front; }
    const ItemType* Back() const { return m_Back; }

    void PushBack(ItemType* item);
    void PushFront(ItemType* item);
    ItemType* PopBack();
    ItemType* PopFront();

    // existingItem can be null - it means PushBack.
    void InsertBefore(ItemType* existingItem, ItemType* newItem);
    // existingItem can be null - it means PushFront.
    void InsertAfter(ItemType* existingItem, ItemType* newItem);
    void Remove(ItemType* item);
    void RemoveAll();

private:
    ItemType* m_Front = VMA_NULL;
    ItemType* m_Back = VMA_NULL;
    size_t m_Count = 0;
};
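
// Illustrative sketch (not part of the library): a minimal item type and
// traits satisfying the interface documented above. The list owns no nodes;
// it only re-links pointers embedded in caller-owned items. Guarded out of
// compilation; "MyItem" and "MyItemTypeTraits" are the hypothetical names
// from the comment above.
#if 0
struct MyItem
{
    MyItem* myPrevPtr = VMA_NULL;
    MyItem* myNextPtr = VMA_NULL;
    int payload = 0;
};
struct MyItemTypeTraits
{
    typedef MyItem ItemType;
    static ItemType* GetPrev(const ItemType* item) { return item->myPrevPtr; }
    static ItemType* GetNext(const ItemType* item) { return item->myNextPtr; }
    static ItemType*& AccessPrev(ItemType* item) { return item->myPrevPtr; }
    static ItemType*& AccessNext(ItemType* item) { return item->myNextPtr; }
};

VmaIntrusiveLinkedList<MyItemTypeTraits> intrusiveList;
MyItem item; // lifetime owned by the caller, not by the list
intrusiveList.PushBack(&item);
intrusiveList.Remove(&item); // list must be empty before its destructor runs
#endif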

#ifndef _VMA_INTRUSIVE_LINKED_LIST_FUNCTIONS
template<typename ItemTypeTraits>
VmaIntrusiveLinkedList<ItemTypeTraits>::VmaIntrusiveLinkedList(VmaIntrusiveLinkedList&& src)
    : m_Front(src.m_Front), m_Back(src.m_Back), m_Count(src.m_Count)
{
    src.m_Front = src.m_Back = VMA_NULL;
    src.m_Count = 0;
}

template<typename ItemTypeTraits>
VmaIntrusiveLinkedList<ItemTypeTraits>& VmaIntrusiveLinkedList<ItemTypeTraits>::operator=(VmaIntrusiveLinkedList&& src)
{
    if (&src != this)
    {
        VMA_HEAVY_ASSERT(IsEmpty());
        m_Front = src.m_Front;
        m_Back = src.m_Back;
        m_Count = src.m_Count;
        src.m_Front = src.m_Back = VMA_NULL;
        src.m_Count = 0;
    }
    return *this;
}

template<typename ItemTypeTraits>
void VmaIntrusiveLinkedList<ItemTypeTraits>::PushBack(ItemType* item)
{
    VMA_HEAVY_ASSERT(ItemTypeTraits::GetPrev(item) == VMA_NULL && ItemTypeTraits::GetNext(item) == VMA_NULL);
    if (IsEmpty())
    {
        m_Front = item;
        m_Back = item;
        m_Count = 1;
    }
    else
    {
        ItemTypeTraits::AccessPrev(item) = m_Back;
        ItemTypeTraits::AccessNext(m_Back) = item;
        m_Back = item;
        ++m_Count;
    }
}

template<typename ItemTypeTraits>
void VmaIntrusiveLinkedList<ItemTypeTraits>::PushFront(ItemType* item)
{
    VMA_HEAVY_ASSERT(ItemTypeTraits::GetPrev(item) == VMA_NULL && ItemTypeTraits::GetNext(item) == VMA_NULL);
    if (IsEmpty())
    {
        m_Front = item;
        m_Back = item;
        m_Count = 1;
    }
    else
    {
        ItemTypeTraits::AccessNext(item) = m_Front;
        ItemTypeTraits::AccessPrev(m_Front) = item;
        m_Front = item;
        ++m_Count;
    }
}

template<typename ItemTypeTraits>
typename VmaIntrusiveLinkedList<ItemTypeTraits>::ItemType* VmaIntrusiveLinkedList<ItemTypeTraits>::PopBack()
{
    VMA_HEAVY_ASSERT(m_Count > 0);
    ItemType* const backItem = m_Back;
    ItemType* const prevItem = ItemTypeTraits::GetPrev(backItem);
    if (prevItem != VMA_NULL)
    {
        ItemTypeTraits::AccessNext(prevItem) = VMA_NULL;
    }
    m_Back = prevItem;
    --m_Count;
    ItemTypeTraits::AccessPrev(backItem) = VMA_NULL;
    ItemTypeTraits::AccessNext(backItem) = VMA_NULL;
    return backItem;
}

template<typename ItemTypeTraits>
typename VmaIntrusiveLinkedList<ItemTypeTraits>::ItemType* VmaIntrusiveLinkedList<ItemTypeTraits>::PopFront()
{
    VMA_HEAVY_ASSERT(m_Count > 0);
    ItemType* const frontItem = m_Front;
    ItemType* const nextItem = ItemTypeTraits::GetNext(frontItem);
    if (nextItem != VMA_NULL)
    {
        ItemTypeTraits::AccessPrev(nextItem) = VMA_NULL;
    }
    m_Front = nextItem;
    --m_Count;
    ItemTypeTraits::AccessPrev(frontItem) = VMA_NULL;
    ItemTypeTraits::AccessNext(frontItem) = VMA_NULL;
    return frontItem;
}

template<typename ItemTypeTraits>
void VmaIntrusiveLinkedList<ItemTypeTraits>::InsertBefore(ItemType* existingItem, ItemType* newItem)
{
    VMA_HEAVY_ASSERT(newItem != VMA_NULL && ItemTypeTraits::GetPrev(newItem) == VMA_NULL && ItemTypeTraits::GetNext(newItem) == VMA_NULL);
    if (existingItem != VMA_NULL)
    {
        ItemType* const prevItem = ItemTypeTraits::GetPrev(existingItem);
        ItemTypeTraits::AccessPrev(newItem) = prevItem;
        ItemTypeTraits::AccessNext(newItem) = existingItem;
        ItemTypeTraits::AccessPrev(existingItem) = newItem;
        if (prevItem != VMA_NULL)
        {
            ItemTypeTraits::AccessNext(prevItem) = newItem;
        }
        else
        {
            VMA_HEAVY_ASSERT(m_Front == existingItem);
            m_Front = newItem;
        }
        ++m_Count;
    }
    else
        PushBack(newItem);
}

template<typename ItemTypeTraits>
void VmaIntrusiveLinkedList<ItemTypeTraits>::InsertAfter(ItemType* existingItem, ItemType* newItem)
{
    VMA_HEAVY_ASSERT(newItem != VMA_NULL && ItemTypeTraits::GetPrev(newItem) == VMA_NULL && ItemTypeTraits::GetNext(newItem) == VMA_NULL);
    if (existingItem != VMA_NULL)
    {
        ItemType* const nextItem = ItemTypeTraits::GetNext(existingItem);
        ItemTypeTraits::AccessNext(newItem) = nextItem;
        ItemTypeTraits::AccessPrev(newItem) = existingItem;
        ItemTypeTraits::AccessNext(existingItem) = newItem;
        if (nextItem != VMA_NULL)
        {
            ItemTypeTraits::AccessPrev(nextItem) = newItem;
        }
        else
        {
            VMA_HEAVY_ASSERT(m_Back == existingItem);
            m_Back = newItem;
        }
        ++m_Count;
    }
    else
        PushFront(newItem);
}

template<typename ItemTypeTraits>
void VmaIntrusiveLinkedList<ItemTypeTraits>::Remove(ItemType* item)
{
    VMA_HEAVY_ASSERT(item != VMA_NULL && m_Count > 0);
    if (ItemTypeTraits::GetPrev(item) != VMA_NULL)
    {
        ItemTypeTraits::AccessNext(ItemTypeTraits::AccessPrev(item)) = ItemTypeTraits::GetNext(item);
    }
    else
    {
        VMA_HEAVY_ASSERT(m_Front == item);
        m_Front = ItemTypeTraits::GetNext(item);
    }

    if (ItemTypeTraits::GetNext(item) != VMA_NULL)
    {
        ItemTypeTraits::AccessPrev(ItemTypeTraits::AccessNext(item)) = ItemTypeTraits::GetPrev(item);
    }
    else
    {
        VMA_HEAVY_ASSERT(m_Back == item);
        m_Back = ItemTypeTraits::GetPrev(item);
    }
    ItemTypeTraits::AccessPrev(item) = VMA_NULL;
    ItemTypeTraits::AccessNext(item) = VMA_NULL;
    --m_Count;
}

template<typename ItemTypeTraits>
void VmaIntrusiveLinkedList<ItemTypeTraits>::RemoveAll()
{
    if (!IsEmpty())
    {
        ItemType* item = m_Back;
        while (item != VMA_NULL)
        {
            ItemType* const prevItem = ItemTypeTraits::AccessPrev(item);
            ItemTypeTraits::AccessPrev(item) = VMA_NULL;
            ItemTypeTraits::AccessNext(item) = VMA_NULL;
            item = prevItem;
        }
        m_Front = VMA_NULL;
        m_Back = VMA_NULL;
        m_Count = 0;
    }
}
#endif // _VMA_INTRUSIVE_LINKED_LIST_FUNCTIONS
#endif // _VMA_INTRUSIVE_LINKED_LIST

// Unused in this version.
#if 0

#ifndef _VMA_PAIR
template<typename T1, typename T2>
struct VmaPair
{
    T1 first;
    T2 second;

    VmaPair() : first(), second() {}
    VmaPair(const T1& firstSrc, const T2& secondSrc) : first(firstSrc), second(secondSrc) {}
};

template<typename FirstT, typename SecondT>
struct VmaPairFirstLess
{
    bool operator()(const VmaPair<FirstT, SecondT>& lhs, const VmaPair<FirstT, SecondT>& rhs) const
    {
        return lhs.first < rhs.first;
    }
    bool operator()(const VmaPair<FirstT, SecondT>& lhs, const FirstT& rhsFirst) const
    {
        return lhs.first < rhsFirst;
    }
};
#endif // _VMA_PAIR

#ifndef _VMA_MAP
/* Class compatible with a subset of the std::unordered_map interface.
KeyT, ValueT must be POD because they will be stored in VmaVector.
*/
template<typename KeyT, typename ValueT>
class VmaMap
{
public:
    typedef VmaPair<KeyT, ValueT> PairType;
    typedef PairType* iterator;

    VmaMap(const VmaStlAllocator<PairType>& allocator) : m_Vector(allocator) {}

    iterator begin() { return m_Vector.begin(); }
    iterator end() { return m_Vector.end(); }
    size_t size() { return m_Vector.size(); }

    void insert(const PairType& pair);
    iterator find(const KeyT& key);
    void erase(iterator it);

private:
    VmaVector< PairType, VmaStlAllocator<PairType>> m_Vector;
};
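
/*
Example usage (a minimal sketch; `allocCallbacks` stands for whatever
VkAllocationCallbacks pointer the surrounding code uses). The vector stays
sorted by key, so find() can run a binary search:

VmaMap<uint32_t, float> map(VmaStlAllocator<VmaPair<uint32_t, float>>(allocCallbacks));
map.insert(VmaPair<uint32_t, float>(7, 1.5f));
VmaMap<uint32_t, float>::iterator it = map.find(7);
if(it != map.end())
    map.erase(it);
*/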

#ifndef _VMA_MAP_FUNCTIONS
template<typename KeyT, typename ValueT>
void VmaMap<KeyT, ValueT>::insert(const PairType& pair)
{
    const size_t indexToInsert = VmaBinaryFindFirstNotLess(
        m_Vector.data(),
        m_Vector.data() + m_Vector.size(),
        pair,
        VmaPairFirstLess<KeyT, ValueT>()) - m_Vector.data();
    VmaVectorInsert(m_Vector, indexToInsert, pair);
}

template<typename KeyT, typename ValueT>
VmaPair<KeyT, ValueT>* VmaMap<KeyT, ValueT>::find(const KeyT& key)
{
    PairType* it = VmaBinaryFindFirstNotLess(
        m_Vector.data(),
        m_Vector.data() + m_Vector.size(),
        key,
        VmaPairFirstLess<KeyT, ValueT>());
    if ((it != m_Vector.end()) && (it->first == key))
    {
        return it;
    }
    else
    {
        return m_Vector.end();
    }
}

template<typename KeyT, typename ValueT>
void VmaMap<KeyT, ValueT>::erase(iterator it)
{
    VmaVectorRemove(m_Vector, it - m_Vector.begin());
}
#endif // _VMA_MAP_FUNCTIONS
#endif // _VMA_MAP

#endif // #if 0

#if !defined(_VMA_STRING_BUILDER) && VMA_STATS_STRING_ENABLED
class VmaStringBuilder
{
public:
    VmaStringBuilder(const VkAllocationCallbacks* allocationCallbacks) : m_Data(VmaStlAllocator<char>(allocationCallbacks)) {}
    ~VmaStringBuilder() = default;

    size_t GetLength() const { return m_Data.size(); }
    const char* GetData() const { return m_Data.data(); }
    void AddNewLine() { Add('\n'); }
    void Add(char ch) { m_Data.push_back(ch); }

    void Add(const char* pStr);
    void AddNumber(uint32_t num);
    void AddNumber(uint64_t num);
    void AddPointer(const void* ptr);

private:
    VmaVector<char, VmaStlAllocator<char>> m_Data;
};
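
/*
Example usage (a minimal sketch; `allocationCallbacks` stands for whatever
VkAllocationCallbacks pointer the surrounding code uses for CPU-side allocation):

VmaStringBuilder sb(allocationCallbacks);
sb.Add("Heap ");
sb.AddNumber(2u); // uint32_t overload - appends decimal digits.
sb.AddNewLine();
// Note: GetData() is not null-terminated - always pair it with GetLength().
fwrite(sb.GetData(), 1, sb.GetLength(), stdout);
*/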

#ifndef _VMA_STRING_BUILDER_FUNCTIONS
void VmaStringBuilder::Add(const char* pStr)
{
    const size_t strLen = strlen(pStr);
    if (strLen > 0)
    {
        const size_t oldCount = m_Data.size();
        m_Data.resize(oldCount + strLen);
        memcpy(m_Data.data() + oldCount, pStr, strLen);
    }
}

void VmaStringBuilder::AddNumber(uint32_t num)
{
    char buf[11];
    buf[10] = '\0';
    char* p = &buf[10];
    do
    {
        *--p = '0' + (num % 10);
        num /= 10;
    } while (num);
    Add(p);
}

void VmaStringBuilder::AddNumber(uint64_t num)
{
    char buf[21];
    buf[20] = '\0';
    char* p = &buf[20];
    do
    {
        *--p = '0' + (num % 10);
        num /= 10;
    } while (num);
    Add(p);
}

void VmaStringBuilder::AddPointer(const void* ptr)
{
    char buf[21];
    VmaPtrToStr(buf, sizeof(buf), ptr);
    Add(buf);
}
#endif //_VMA_STRING_BUILDER_FUNCTIONS
#endif // _VMA_STRING_BUILDER

#if !defined(_VMA_JSON_WRITER) && VMA_STATS_STRING_ENABLED
/*
Allows one to conveniently build a correct JSON document that is written to the
VmaStringBuilder passed to the constructor.
*/
class VmaJsonWriter
{
    VMA_CLASS_NO_COPY(VmaJsonWriter)
public:
    // sb - string builder to write the document to. Must remain alive for the whole lifetime of this object.
    VmaJsonWriter(const VkAllocationCallbacks* pAllocationCallbacks, VmaStringBuilder& sb);
    ~VmaJsonWriter();

    // Begins object by writing "{".
    // Inside an object, you must call pairs of WriteString and a value, e.g.:
    // j.BeginObject(true); j.WriteString("A"); j.WriteNumber(1); j.WriteString("B"); j.WriteNumber(2); j.EndObject();
    // Will write: { "A": 1, "B": 2 }
    void BeginObject(bool singleLine = false);
    // Ends object by writing "}".
    void EndObject();

    // Begins array by writing "[".
    // Inside an array, you can write a sequence of any values.
    void BeginArray(bool singleLine = false);
    // Ends array by writing "]".
    void EndArray();

    // Writes a string value inside "".
    // pStr can contain any ANSI characters, including '"', new line etc. - they will be properly escaped.
    void WriteString(const char* pStr);

    // Begins writing a string value.
    // Call BeginString, ContinueString, ContinueString, ..., EndString instead of
    // WriteString to conveniently build the string content incrementally, made of
    // parts including numbers.
    void BeginString(const char* pStr = VMA_NULL);
    // Posts next part of an open string.
    void ContinueString(const char* pStr);
    // Posts next part of an open string. The number is converted to decimal characters.
    void ContinueString(uint32_t n);
    void ContinueString(uint64_t n);
    void ContinueString_Size(size_t n);
    // Posts next part of an open string. Pointer value is converted to characters
    // using "%p" formatting - shown as hexadecimal number, e.g.: 000000081276Ad00
    void ContinueString_Pointer(const void* ptr);
    // Ends writing a string value by writing '"'.
    void EndString(const char* pStr = VMA_NULL);

    // Writes a number value.
    void WriteNumber(uint32_t n);
    void WriteNumber(uint64_t n);
    void WriteSize(size_t n);
    // Writes a boolean value - false or true.
    void WriteBool(bool b);
    // Writes a null value.
    void WriteNull();

private:
    enum COLLECTION_TYPE
    {
        COLLECTION_TYPE_OBJECT,
        COLLECTION_TYPE_ARRAY,
    };
    struct StackItem
    {
        COLLECTION_TYPE type;
        uint32_t valueCount;
        bool singleLineMode;
    };

    static const char* const INDENT;

    VmaStringBuilder& m_SB;
    VmaVector< StackItem, VmaStlAllocator<StackItem> > m_Stack;
    bool m_InsideString;

    // Writes size_t when it is narrower than 64 bits.
    void WriteSize(size_t n, std::integral_constant<bool, false>) { m_SB.AddNumber(static_cast<uint32_t>(n)); }
    // Writes size_t when it is 64 bits wide.
    void WriteSize(size_t n, std::integral_constant<bool, true>) { m_SB.AddNumber(static_cast<uint64_t>(n)); }
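    // The std::integral_constant argument is a compile-time tag: the public
    // WriteSize(size_t) below passes std::is_same<size_t, uint64_t>{} so that
    // overload resolution picks the AddNumber overload matching the platform's
    // size_t width (tag dispatch - a pre-C++17 substitute for `if constexpr`).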

    void BeginValue(bool isString);
    void WriteIndent(bool oneLess = false);
};
const char* const VmaJsonWriter::INDENT = "  ";
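
/*
Example (a minimal sketch; `allocationCallbacks` is a placeholder and `sb` is a
live VmaStringBuilder): writing an object equivalent to
{ "Name": "Block 3", "Mapped": true } using the incremental string API:

VmaJsonWriter json(allocationCallbacks, sb);
json.BeginObject();
json.WriteString("Name");    // Key - must be a string.
json.BeginString("Block ");
json.ContinueString(3u);     // Number appended as decimal characters.
json.EndString();
json.WriteString("Mapped");
json.WriteBool(true);
json.EndObject();            // Destructor asserts the stack is empty.
*/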

#ifndef _VMA_JSON_WRITER_FUNCTIONS
VmaJsonWriter::VmaJsonWriter(const VkAllocationCallbacks* pAllocationCallbacks, VmaStringBuilder& sb)
    : m_SB(sb),
    m_Stack(VmaStlAllocator<StackItem>(pAllocationCallbacks)),
    m_InsideString(false) {}

VmaJsonWriter::~VmaJsonWriter()
{
    VMA_ASSERT(!m_InsideString);
    VMA_ASSERT(m_Stack.empty());
}

void VmaJsonWriter::BeginObject(bool singleLine)
{
    VMA_ASSERT(!m_InsideString);

    BeginValue(false);
    m_SB.Add('{');

    StackItem item;
    item.type = COLLECTION_TYPE_OBJECT;
    item.valueCount = 0;
    item.singleLineMode = singleLine;
    m_Stack.push_back(item);
}

void VmaJsonWriter::EndObject()
{
    VMA_ASSERT(!m_InsideString);

    WriteIndent(true);
    m_SB.Add('}');

    VMA_ASSERT(!m_Stack.empty() && m_Stack.back().type == COLLECTION_TYPE_OBJECT);
    m_Stack.pop_back();
}

void VmaJsonWriter::BeginArray(bool singleLine)
{
    VMA_ASSERT(!m_InsideString);

    BeginValue(false);
    m_SB.Add('[');

    StackItem item;
    item.type = COLLECTION_TYPE_ARRAY;
    item.valueCount = 0;
    item.singleLineMode = singleLine;
    m_Stack.push_back(item);
}

void VmaJsonWriter::EndArray()
{
    VMA_ASSERT(!m_InsideString);

    WriteIndent(true);
    m_SB.Add(']');

    VMA_ASSERT(!m_Stack.empty() && m_Stack.back().type == COLLECTION_TYPE_ARRAY);
    m_Stack.pop_back();
}

void VmaJsonWriter::WriteString(const char* pStr)
{
    BeginString(pStr);
    EndString();
}

void VmaJsonWriter::BeginString(const char* pStr)
{
    VMA_ASSERT(!m_InsideString);

    BeginValue(true);
    m_SB.Add('"');
    m_InsideString = true;
    if (pStr != VMA_NULL && pStr[0] != '\0')
    {
        ContinueString(pStr);
    }
}

void VmaJsonWriter::ContinueString(const char* pStr)
{
    VMA_ASSERT(m_InsideString);

    const size_t strLen = strlen(pStr);
    for (size_t i = 0; i < strLen; ++i)
    {
        char ch = pStr[i];
        if (ch == '\\')
        {
            m_SB.Add("\\\\");
        }
        else if (ch == '"')
        {
            m_SB.Add("\\\"");
        }
        else if (ch >= 32)
        {
            m_SB.Add(ch);
        }
        else switch (ch)
        {
        case '\b':
            m_SB.Add("\\b");
            break;
        case '\f':
            m_SB.Add("\\f");
            break;
        case '\n':
            m_SB.Add("\\n");
            break;
        case '\r':
            m_SB.Add("\\r");
            break;
        case '\t':
            m_SB.Add("\\t");
            break;
        default:
            VMA_ASSERT(0 && "Character not currently supported.");
            break;
        }
    }
}

void VmaJsonWriter::ContinueString(uint32_t n)
{
    VMA_ASSERT(m_InsideString);
    m_SB.AddNumber(n);
}

void VmaJsonWriter::ContinueString(uint64_t n)
{
    VMA_ASSERT(m_InsideString);
    m_SB.AddNumber(n);
}

void VmaJsonWriter::ContinueString_Size(size_t n)
{
    VMA_ASSERT(m_InsideString);
    // Fix for AppleClang incorrect type casting.
    // TODO: Change to `if constexpr` when C++17 becomes the minimum standard.
    WriteSize(n, std::is_same<size_t, uint64_t>{});
}

void VmaJsonWriter::ContinueString_Pointer(const void* ptr)
{
    VMA_ASSERT(m_InsideString);
    m_SB.AddPointer(ptr);
}

void VmaJsonWriter::EndString(const char* pStr)
{
    VMA_ASSERT(m_InsideString);
    if (pStr != VMA_NULL && pStr[0] != '\0')
    {
        ContinueString(pStr);
    }
    m_SB.Add('"');
    m_InsideString = false;
}

void VmaJsonWriter::WriteNumber(uint32_t n)
{
    VMA_ASSERT(!m_InsideString);
    BeginValue(false);
    m_SB.AddNumber(n);
}

void VmaJsonWriter::WriteNumber(uint64_t n)
{
    VMA_ASSERT(!m_InsideString);
    BeginValue(false);
    m_SB.AddNumber(n);
}

void VmaJsonWriter::WriteSize(size_t n)
{
    VMA_ASSERT(!m_InsideString);
    BeginValue(false);
    // Fix for AppleClang incorrect type casting.
    // TODO: Change to `if constexpr` when C++17 becomes the minimum standard.
    WriteSize(n, std::is_same<size_t, uint64_t>{});
}

void VmaJsonWriter::WriteBool(bool b)
{
    VMA_ASSERT(!m_InsideString);
    BeginValue(false);
    m_SB.Add(b ? "true" : "false");
}

void VmaJsonWriter::WriteNull()
{
    VMA_ASSERT(!m_InsideString);
    BeginValue(false);
    m_SB.Add("null");
}

void VmaJsonWriter::BeginValue(bool isString)
{
    if (!m_Stack.empty())
    {
        StackItem& currItem = m_Stack.back();
        if (currItem.type == COLLECTION_TYPE_OBJECT &&
            currItem.valueCount % 2 == 0)
        {
            VMA_ASSERT(isString);
        }

        if (currItem.type == COLLECTION_TYPE_OBJECT &&
            currItem.valueCount % 2 != 0)
        {
            m_SB.Add(": ");
        }
        else if (currItem.valueCount > 0)
        {
            m_SB.Add(", ");
            WriteIndent();
        }
        else
        {
            WriteIndent();
        }
        ++currItem.valueCount;
    }
}

void VmaJsonWriter::WriteIndent(bool oneLess)
{
    if (!m_Stack.empty() && !m_Stack.back().singleLineMode)
    {
        m_SB.AddNewLine();

        size_t count = m_Stack.size();
        if (count > 0 && oneLess)
        {
            --count;
        }
        for (size_t i = 0; i < count; ++i)
        {
            m_SB.Add(INDENT);
        }
    }
}
#endif // _VMA_JSON_WRITER_FUNCTIONS

static void VmaPrintDetailedStatistics(VmaJsonWriter& json, const VmaDetailedStatistics& stat)
{
    json.BeginObject();

    json.WriteString("BlockCount");
    json.WriteNumber(stat.statistics.blockCount);
    json.WriteString("BlockBytes");
    json.WriteNumber(stat.statistics.blockBytes);
    json.WriteString("AllocationCount");
    json.WriteNumber(stat.statistics.allocationCount);
    json.WriteString("AllocationBytes");
    json.WriteNumber(stat.statistics.allocationBytes);
    json.WriteString("UnusedRangeCount");
    json.WriteNumber(stat.unusedRangeCount);

    if (stat.statistics.allocationCount > 1)
    {
        json.WriteString("AllocationSizeMin");
        json.WriteNumber(stat.allocationSizeMin);
        json.WriteString("AllocationSizeMax");
        json.WriteNumber(stat.allocationSizeMax);
    }
    if (stat.unusedRangeCount > 1)
    {
        json.WriteString("UnusedRangeSizeMin");
        json.WriteNumber(stat.unusedRangeSizeMin);
        json.WriteString("UnusedRangeSizeMax");
        json.WriteNumber(stat.unusedRangeSizeMax);
    }
    json.EndObject();
}
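
// The object written above has this shape (a sketch; the min/max pairs appear
// only when there is more than one allocation / unused range):
// { "BlockCount": N, "BlockBytes": N, "AllocationCount": N,
//   "AllocationBytes": N, "UnusedRangeCount": N,
//   "AllocationSizeMin": N, "AllocationSizeMax": N,
//   "UnusedRangeSizeMin": N, "UnusedRangeSizeMax": N }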
#endif // _VMA_JSON_WRITER

#ifndef _VMA_MAPPING_HYSTERESIS

class VmaMappingHysteresis
{
    VMA_CLASS_NO_COPY(VmaMappingHysteresis)
public:
    VmaMappingHysteresis() = default;

    uint32_t GetExtraMapping() const { return m_ExtraMapping; }

    // Call when Map was called.
    // Returns true if switched to extra +1 mapping reference count.
    bool PostMap()
    {
#if VMA_MAPPING_HYSTERESIS_ENABLED
        if(m_ExtraMapping == 0)
        {
            ++m_MajorCounter;
            if(m_MajorCounter >= COUNTER_MIN_EXTRA_MAPPING)
            {
                m_ExtraMapping = 1;
                m_MajorCounter = 0;
                m_MinorCounter = 0;
                return true;
            }
        }
        else // m_ExtraMapping == 1
            PostMinorCounter();
#endif // #if VMA_MAPPING_HYSTERESIS_ENABLED
        return false;
    }

    // Call when Unmap was called.
    void PostUnmap()
    {
#if VMA_MAPPING_HYSTERESIS_ENABLED
        if(m_ExtraMapping == 0)
            ++m_MajorCounter;
        else // m_ExtraMapping == 1
            PostMinorCounter();
#endif // #if VMA_MAPPING_HYSTERESIS_ENABLED
    }

    // Call when allocation was made from the memory block.
    void PostAlloc()
    {
#if VMA_MAPPING_HYSTERESIS_ENABLED
        if(m_ExtraMapping == 1)
            ++m_MajorCounter;
        else // m_ExtraMapping == 0
            PostMinorCounter();
#endif // #if VMA_MAPPING_HYSTERESIS_ENABLED
    }

    // Call when allocation was freed from the memory block.
    // Returns true if switched to extra -1 mapping reference count.
    bool PostFree()
    {
#if VMA_MAPPING_HYSTERESIS_ENABLED
        if(m_ExtraMapping == 1)
        {
            ++m_MajorCounter;
            if(m_MajorCounter >= COUNTER_MIN_EXTRA_MAPPING &&
                m_MajorCounter > m_MinorCounter + 1)
            {
                m_ExtraMapping = 0;
                m_MajorCounter = 0;
                m_MinorCounter = 0;
                return true;
            }
        }
        else // m_ExtraMapping == 0
            PostMinorCounter();
#endif // #if VMA_MAPPING_HYSTERESIS_ENABLED
        return false;
    }

private:
    static const int32_t COUNTER_MIN_EXTRA_MAPPING = 7;

    uint32_t m_MinorCounter = 0;
    uint32_t m_MajorCounter = 0;
    uint32_t m_ExtraMapping = 0; // 0 or 1.

    void PostMinorCounter()
    {
        if(m_MinorCounter < m_MajorCounter)
        {
            ++m_MinorCounter;
        }
        else if(m_MajorCounter > 0)
        {
            --m_MajorCounter;
            --m_MinorCounter;
        }
    }
};
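
/*
A sketch of how the hysteresis behaves (assuming VMA_MAPPING_HYSTERESIS_ENABLED
is nonzero): while no extra mapping is held, Map/Unmap events advance
m_MajorCounter toward acquiring one; while it is held, Alloc/Free events advance
it toward releasing it. Opposing events feed PostMinorCounter(), which erodes
that progress, so only sustained pressure flips the state. For example:

VmaMappingHysteresis h;
for(uint32_t i = 0; i < 6; ++i)
    h.PostMap(); // Returns false 6 times - threshold (7) not reached yet.
bool acquired = h.PostMap(); // 7th call: returns true, GetExtraMapping() == 1.
*/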

#endif // _VMA_MAPPING_HYSTERESIS

#ifndef _VMA_DEVICE_MEMORY_BLOCK
/*
Represents a single block of device memory (`VkDeviceMemory`) with all the
data about its regions (aka suballocations, #VmaAllocation), assigned and free.

Thread-safety:
- Access to m_pMetadata must be externally synchronized.
- Map, Unmap, Bind* are synchronized internally.
*/
class VmaDeviceMemoryBlock
{
    VMA_CLASS_NO_COPY(VmaDeviceMemoryBlock)
public:
    VmaBlockMetadata* m_pMetadata;

    VmaDeviceMemoryBlock(VmaAllocator hAllocator);
    ~VmaDeviceMemoryBlock();

    // Always call after construction.
    void Init(
        VmaAllocator hAllocator,
        VmaPool hParentPool,
        uint32_t newMemoryTypeIndex,
        VkDeviceMemory newMemory,
        VkDeviceSize newSize,
        uint32_t id,
        uint32_t algorithm,
        VkDeviceSize bufferImageGranularity);
    // Always call before destruction.
    void Destroy(VmaAllocator allocator);

    VmaPool GetParentPool() const { return m_hParentPool; }
    VkDeviceMemory GetDeviceMemory() const { return m_hMemory; }
    uint32_t GetMemoryTypeIndex() const { return m_MemoryTypeIndex; }
    uint32_t GetId() const { return m_Id; }
    void* GetMappedData() const { return m_pMappedData; }
    uint32_t GetMapRefCount() const { return m_MapCount; }

    // Call when allocation/free was made from m_pMetadata.
    // Used for m_MappingHysteresis.
    void PostAlloc() { m_MappingHysteresis.PostAlloc(); }
    void PostFree(VmaAllocator hAllocator);

    // Validates all data structures inside this object. If not valid, returns false.
    bool Validate() const;
    VkResult CheckCorruption(VmaAllocator hAllocator);

    // ppData can be null.
    VkResult Map(VmaAllocator hAllocator, uint32_t count, void** ppData);
    void Unmap(VmaAllocator hAllocator, uint32_t count);

    VkResult WriteMagicValueAfterAllocation(VmaAllocator hAllocator, VkDeviceSize allocOffset, VkDeviceSize allocSize);
    VkResult ValidateMagicValueAfterAllocation(VmaAllocator hAllocator, VkDeviceSize allocOffset, VkDeviceSize allocSize);

    VkResult BindBufferMemory(
        const VmaAllocator hAllocator,
        const VmaAllocation hAllocation,
        VkDeviceSize allocationLocalOffset,
        VkBuffer hBuffer,
        const void* pNext);
    VkResult BindImageMemory(
        const VmaAllocator hAllocator,
        const VmaAllocation hAllocation,
        VkDeviceSize allocationLocalOffset,
        VkImage hImage,
        const void* pNext);

private:
    VmaPool m_hParentPool; // VK_NULL_HANDLE if it does not belong to a custom pool.
    uint32_t m_MemoryTypeIndex;
    uint32_t m_Id;
    VkDeviceMemory m_hMemory;

    /*
    Protects access to m_hMemory so it is not used by multiple threads simultaneously, e.g. vkMapMemory, vkBindBufferMemory.
    Also protects m_MapCount, m_pMappedData.
    Allocations, deallocations, any change in m_pMetadata is protected by parent's VmaBlockVector::m_Mutex.
    */
    VMA_MUTEX m_MapAndBindMutex;
    VmaMappingHysteresis m_MappingHysteresis;
    uint32_t m_MapCount;
    void* m_pMappedData;
};
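
/*
Mapping is reference-counted (a sketch; hAllocator stands for the owning
VmaAllocator handle and pBlock for an initialized block):

void* pData = VMA_NULL;
pBlock->Map(hAllocator, 1, &pData); // Ref count 0 -> 1: memory gets mapped.
pBlock->Map(hAllocator, 1, &pData); // Ref count 1 -> 2: same pointer returned.
pBlock->Unmap(hAllocator, 2);       // Ref count 2 -> 0: memory may be unmapped
                                    // (the hysteresis may keep it mapped).
*/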
#endif // _VMA_DEVICE_MEMORY_BLOCK

#ifndef _VMA_ALLOCATION_T
struct VmaAllocation_T
{
    friend struct VmaDedicatedAllocationListItemTraits;

    enum FLAGS
    {
        FLAG_PERSISTENT_MAP   = 0x01,
        FLAG_MAPPING_ALLOWED  = 0x02,
    };

public:
    enum ALLOCATION_TYPE
    {
        ALLOCATION_TYPE_NONE,
        ALLOCATION_TYPE_BLOCK,
        ALLOCATION_TYPE_DEDICATED,
    };

    // This struct is allocated using VmaPoolAllocator.
    VmaAllocation_T(bool mappingAllowed);
    ~VmaAllocation_T();

    void InitBlockAllocation(
        VmaDeviceMemoryBlock* block,
        VmaAllocHandle allocHandle,
        VkDeviceSize alignment,
        VkDeviceSize size,
        uint32_t memoryTypeIndex,
        VmaSuballocationType suballocationType,
        bool mapped);
    // pMappedData not null means allocation is created with MAPPED flag.
    void InitDedicatedAllocation(
        VmaPool hParentPool,
        uint32_t memoryTypeIndex,
        VkDeviceMemory hMemory,
        VmaSuballocationType suballocationType,
        void* pMappedData,
        VkDeviceSize size);

    ALLOCATION_TYPE GetType() const { return (ALLOCATION_TYPE)m_Type; }
    VkDeviceSize GetAlignment() const { return m_Alignment; }
    VkDeviceSize GetSize() const { return m_Size; }
    void* GetUserData() const { return m_pUserData; }
    const char* GetName() const { return m_pName; }
    VmaSuballocationType GetSuballocationType() const { return (VmaSuballocationType)m_SuballocationType; }

    VmaDeviceMemoryBlock* GetBlock() const { VMA_ASSERT(m_Type == ALLOCATION_TYPE_BLOCK); return m_BlockAllocation.m_Block; }
    uint32_t GetMemoryTypeIndex() const { return m_MemoryTypeIndex; }
    bool IsPersistentMap() const { return (m_Flags & FLAG_PERSISTENT_MAP) != 0; }
    bool IsMappingAllowed() const { return (m_Flags & FLAG_MAPPING_ALLOWED) != 0; }

    void SetUserData(VmaAllocator hAllocator, void* pUserData) { m_pUserData = pUserData; }
    void SetName(VmaAllocator hAllocator, const char* pName);
    void FreeName(VmaAllocator hAllocator);
    uint8_t SwapBlockAllocation(VmaAllocator hAllocator, VmaAllocation allocation);
    VmaAllocHandle GetAllocHandle() const;
    VkDeviceSize GetOffset() const;
    VmaPool GetParentPool() const;
    VkDeviceMemory GetMemory() const;
    void* GetMappedData() const;

    void BlockAllocMap();
    void BlockAllocUnmap();
    VkResult DedicatedAllocMap(VmaAllocator hAllocator, void** ppData);
    void DedicatedAllocUnmap(VmaAllocator hAllocator);

#if VMA_STATS_STRING_ENABLED
    uint32_t GetBufferImageUsage() const { return m_BufferImageUsage; }

    void InitBufferImageUsage(uint32_t bufferImageUsage);
    void PrintParameters(class VmaJsonWriter& json) const;
#endif

private:
    // Allocation out of VmaDeviceMemoryBlock.
    struct BlockAllocation
    {
        VmaDeviceMemoryBlock* m_Block;
        VmaAllocHandle m_AllocHandle;
    };
    // Allocation for an object that has its own private VkDeviceMemory.
    struct DedicatedAllocation
    {
        VmaPool m_hParentPool; // VK_NULL_HANDLE if it does not belong to a custom pool.
        VkDeviceMemory m_hMemory;
        void* m_pMappedData; // Not null means memory is mapped.
        VmaAllocation_T* m_Prev;
        VmaAllocation_T* m_Next;
    };
    union
    {
        // Allocation out of VmaDeviceMemoryBlock.
        BlockAllocation m_BlockAllocation;
        // Allocation for an object that has its own private VkDeviceMemory.
        DedicatedAllocation m_DedicatedAllocation;
    };

    VkDeviceSize m_Alignment;
    VkDeviceSize m_Size;
    void* m_pUserData;
    char* m_pName;
    uint32_t m_MemoryTypeIndex;
    uint8_t m_Type; // ALLOCATION_TYPE
    uint8_t m_SuballocationType; // VmaSuballocationType
    // Reference counter for vmaMapMemory()/vmaUnmapMemory().
    uint8_t m_MapCount;
    uint8_t m_Flags; // enum FLAGS
#if VMA_STATS_STRING_ENABLED
    uint32_t m_BufferImageUsage; // 0 if unknown.
#endif
};
#endif // _VMA_ALLOCATION_T

#ifndef _VMA_DEDICATED_ALLOCATION_LIST_ITEM_TRAITS
struct VmaDedicatedAllocationListItemTraits
{
    typedef VmaAllocation_T ItemType;

    static ItemType* GetPrev(const ItemType* item)
    {
        VMA_HEAVY_ASSERT(item->GetType() == VmaAllocation_T::ALLOCATION_TYPE_DEDICATED);
        return item->m_DedicatedAllocation.m_Prev;
    }
    static ItemType* GetNext(const ItemType* item)
    {
        VMA_HEAVY_ASSERT(item->GetType() == VmaAllocation_T::ALLOCATION_TYPE_DEDICATED);
        return item->m_DedicatedAllocation.m_Next;
    }
    static ItemType*& AccessPrev(ItemType* item)
    {
        VMA_HEAVY_ASSERT(item->GetType() == VmaAllocation_T::ALLOCATION_TYPE_DEDICATED);
        return item->m_DedicatedAllocation.m_Prev;
    }
    static ItemType*& AccessNext(ItemType* item)
    {
        VMA_HEAVY_ASSERT(item->GetType() == VmaAllocation_T::ALLOCATION_TYPE_DEDICATED);
        return item->m_DedicatedAllocation.m_Next;
    }
};
#endif // _VMA_DEDICATED_ALLOCATION_LIST_ITEM_TRAITS

#ifndef _VMA_DEDICATED_ALLOCATION_LIST
/*
Stores a linked list of VmaAllocation_T objects.
Thread-safe, synchronized internally.
*/
class VmaDedicatedAllocationList
{
public:
    VmaDedicatedAllocationList() {}
    ~VmaDedicatedAllocationList();

    void Init(bool useMutex) { m_UseMutex = useMutex; }
    bool Validate();

    void AddDetailedStatistics(VmaDetailedStatistics& inoutStats);
    void AddStatistics(VmaStatistics& inoutStats);
#if VMA_STATS_STRING_ENABLED
    // Writes JSON array with the list of allocations.
    void BuildStatsString(VmaJsonWriter& json);
#endif

    bool IsEmpty();
    void Register(VmaAllocation alloc);
    void Unregister(VmaAllocation alloc);

private:
    typedef VmaIntrusiveLinkedList<VmaDedicatedAllocationListItemTraits> DedicatedAllocationLinkedList;

    bool m_UseMutex = true;
    VMA_RW_MUTEX m_Mutex;
    DedicatedAllocationLinkedList m_AllocationList;
};

#ifndef _VMA_DEDICATED_ALLOCATION_LIST_FUNCTIONS

VmaDedicatedAllocationList::~VmaDedicatedAllocationList()
{
    VMA_HEAVY_ASSERT(Validate());

    if (!m_AllocationList.IsEmpty())
    {
        VMA_ASSERT(false && "Unfreed dedicated allocations found!");
    }
}

bool VmaDedicatedAllocationList::Validate()
{
    const size_t declaredCount = m_AllocationList.GetCount();
    size_t actualCount = 0;
    VmaMutexLockRead lock(m_Mutex, m_UseMutex);
    for (VmaAllocation alloc = m_AllocationList.Front();
        alloc != VMA_NULL; alloc = m_AllocationList.GetNext(alloc))
    {
        ++actualCount;
    }
    VMA_VALIDATE(actualCount == declaredCount);

    return true;
}

void VmaDedicatedAllocationList::AddDetailedStatistics(VmaDetailedStatistics& inoutStats)
{
    for(auto* item = m_AllocationList.Front(); item != nullptr; item = DedicatedAllocationLinkedList::GetNext(item))
    {
        const VkDeviceSize size = item->GetSize();
        inoutStats.statistics.blockCount++;
        inoutStats.statistics.blockBytes += size;
        VmaAddDetailedStatisticsAllocation(inoutStats, item->GetSize());
    }
}

void VmaDedicatedAllocationList::AddStatistics(VmaStatistics& inoutStats)
{
    VmaMutexLockRead lock(m_Mutex, m_UseMutex);

    const uint32_t allocCount = (uint32_t)m_AllocationList.GetCount();
    inoutStats.blockCount += allocCount;
    inoutStats.allocationCount += allocCount;

    for(auto* item = m_AllocationList.Front(); item != nullptr; item = DedicatedAllocationLinkedList::GetNext(item))
    {
        const VkDeviceSize size = item->GetSize();
        inoutStats.blockBytes += size;
        inoutStats.allocationBytes += size;
    }
}

#if VMA_STATS_STRING_ENABLED
void VmaDedicatedAllocationList::BuildStatsString(VmaJsonWriter& json)
{
    VmaMutexLockRead lock(m_Mutex, m_UseMutex);
    json.BeginArray();
    for (VmaAllocation alloc = m_AllocationList.Front();
        alloc != VMA_NULL; alloc = m_AllocationList.GetNext(alloc))
    {
        json.BeginObject(true);
        alloc->PrintParameters(json);
        json.EndObject();
    }
    json.EndArray();
}
#endif // VMA_STATS_STRING_ENABLED

bool VmaDedicatedAllocationList::IsEmpty()
{
    VmaMutexLockRead lock(m_Mutex, m_UseMutex);
    return m_AllocationList.IsEmpty();
}

void VmaDedicatedAllocationList::Register(VmaAllocation alloc)
{
    VmaMutexLockWrite lock(m_Mutex, m_UseMutex);
    m_AllocationList.PushBack(alloc);
}

void VmaDedicatedAllocationList::Unregister(VmaAllocation alloc)
{
    VmaMutexLockWrite lock(m_Mutex, m_UseMutex);
    m_AllocationList.Remove(alloc);
}
#endif // _VMA_DEDICATED_ALLOCATION_LIST_FUNCTIONS
#endif // _VMA_DEDICATED_ALLOCATION_LIST

#ifndef _VMA_SUBALLOCATION
/*
Represents a region of VmaDeviceMemoryBlock that is either assigned and
returned as an allocated memory block, or free.
*/
struct VmaSuballocation
{
    VkDeviceSize offset;
    VkDeviceSize size;
    void* userData;
    VmaSuballocationType type;
};

// Comparator for offsets.
struct VmaSuballocationOffsetLess
{
    bool operator()(const VmaSuballocation& lhs, const VmaSuballocation& rhs) const
    {
        return lhs.offset < rhs.offset;
    }
};

struct VmaSuballocationOffsetGreater
{
    bool operator()(const VmaSuballocation& lhs, const VmaSuballocation& rhs) const
    {
        return lhs.offset > rhs.offset;
    }
};

struct VmaSuballocationItemSizeLess
{
    bool operator()(const VmaSuballocationList::iterator lhs,
        const VmaSuballocationList::iterator rhs) const
    {
        return lhs->size < rhs->size;
    }

    bool operator()(const VmaSuballocationList::iterator lhs,
        VkDeviceSize rhsSize) const
    {
        return lhs->size < rhsSize;
    }
};
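
/*
Example (a sketch; `allocCallbacks` is a placeholder VkAllocationCallbacks
pointer): keeping a vector of suballocations ordered by offset using the
comparator above together with the library's VMA_SORT macro:

VmaVector<VmaSuballocation, VmaStlAllocator<VmaSuballocation>> suballocs(
    VmaStlAllocator<VmaSuballocation>(allocCallbacks));
// ... push_back suballocations ...
VMA_SORT(suballocs.begin(), suballocs.end(), VmaSuballocationOffsetLess());
*/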
#endif // _VMA_SUBALLOCATION

#ifndef _VMA_ALLOCATION_REQUEST
/*
Parameters of planned allocation inside a VmaDeviceMemoryBlock.
item points to a FREE suballocation.
*/
struct VmaAllocationRequest
{
    VmaAllocHandle allocHandle;
    VkDeviceSize size;
    VmaSuballocationList::iterator item;
    void* customData;
    uint64_t algorithmData;
    VmaAllocationRequestType type;
};
#endif // _VMA_ALLOCATION_REQUEST

#ifndef _VMA_BLOCK_METADATA
/*
Data structure used for bookkeeping of allocations and unused ranges of memory
in a single VkDeviceMemory block.
*/
class VmaBlockMetadata
{
public:
    // pAllocationCallbacks, if not null, must be owned externally - alive and unchanged for the whole lifetime of this object.
    VmaBlockMetadata(const VkAllocationCallbacks* pAllocationCallbacks,
        VkDeviceSize bufferImageGranularity, bool isVirtual);
    virtual ~VmaBlockMetadata() = default;

    virtual void Init(VkDeviceSize size) { m_Size = size; }
    bool IsVirtual() const { return m_IsVirtual; }
    VkDeviceSize GetSize() const { return m_Size; }

    // Validates all data structures inside this object. If not valid, returns false.
    virtual bool Validate() const = 0;
    virtual size_t GetAllocationCount() const = 0;
    virtual size_t GetFreeRegionsCount() const = 0;
    virtual VkDeviceSize GetSumFreeSize() const = 0;
    // Returns true if this block is empty - contains only a single free suballocation.
    virtual bool IsEmpty() const = 0;
    virtual void GetAllocationInfo(VmaAllocHandle allocHandle, VmaVirtualAllocationInfo& outInfo) = 0;
    virtual VkDeviceSize GetAllocationOffset(VmaAllocHandle allocHandle) const = 0;
    virtual void* GetAllocationUserData(VmaAllocHandle allocHandle) const = 0;

    virtual VmaAllocHandle GetAllocationListBegin() const = 0;
    virtual VmaAllocHandle GetNextAllocation(VmaAllocHandle prevAlloc) const = 0;
    virtual VkDeviceSize GetNextFreeRegionSize(VmaAllocHandle alloc) const = 0;

    // Shouldn't modify blockCount.
    virtual void AddDetailedStatistics(VmaDetailedStatistics& inoutStats) const = 0;
    virtual void AddStatistics(VmaStatistics& inoutStats) const = 0;

#if VMA_STATS_STRING_ENABLED
    virtual void PrintDetailedMap(class VmaJsonWriter& json) const = 0;
#endif

    // Tries to find a place for suballocation with given parameters inside this block.
    // If succeeded, fills pAllocationRequest and returns true.
    // If failed, returns false.
    virtual bool CreateAllocationRequest(
        VkDeviceSize allocSize,
        VkDeviceSize allocAlignment,
        bool upperAddress,
        VmaSuballocationType allocType,
        // Always one of VMA_ALLOCATION_CREATE_STRATEGY_* or VMA_ALLOCATION_INTERNAL_STRATEGY_* flags.
        uint32_t strategy,
        VmaAllocationRequest* pAllocationRequest) = 0;

    virtual VkResult CheckCorruption(const void* pBlockData) = 0;

    // Makes actual allocation based on request. Request must already be checked and valid.
    virtual void Alloc(
        const VmaAllocationRequest& request,
        VmaSuballocationType type,
        void* userData) = 0;

    // Frees suballocation assigned to given memory region.
    virtual void Free(VmaAllocHandle allocHandle) = 0;

    // Frees all allocations.
    // Careful! Don't call it if there are VmaAllocation objects owned by userData of cleared allocations!
    virtual void Clear() = 0;

    virtual void SetAllocationUserData(VmaAllocHandle allocHandle, void* userData) = 0;
    virtual void DebugLogAllAllocations() const = 0;

protected:
    const VkAllocationCallbacks* GetAllocationCallbacks() const { return m_pAllocationCallbacks; }
    VkDeviceSize GetBufferImageGranularity() const { return m_BufferImageGranularity; }
    VkDeviceSize GetDebugMargin() const { return IsVirtual() ? 0 : VMA_DEBUG_MARGIN; }

    void DebugLogAllocation(VkDeviceSize offset, VkDeviceSize size, void* userData) const;
#if VMA_STATS_STRING_ENABLED
    void PrintDetailedMap_Begin(class VmaJsonWriter& json,
        VkDeviceSize unusedBytes,
        size_t allocationCount,
        size_t unusedRangeCount) const;
    void PrintDetailedMap_Allocation(class VmaJsonWriter& json,
        VkDeviceSize offset, VkDeviceSize size, void* userData) const;
    void PrintDetailedMap_UnusedRange(class VmaJsonWriter& json,
        VkDeviceSize offset,
        VkDeviceSize size) const;
    void PrintDetailedMap_End(class VmaJsonWriter& json) const;
#endif

private:
    VkDeviceSize m_Size;
    const VkAllocationCallbacks* m_pAllocationCallbacks;
    const VkDeviceSize m_BufferImageGranularity;
    const bool m_IsVirtual;
};
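
/*
Typical allocation protocol against a VmaBlockMetadata implementation
(a sketch; error handling omitted, `metadata` and `userData` are placeholders):

VmaAllocationRequest request = {};
if (metadata->CreateAllocationRequest(
        allocSize, allocAlignment,
        false, // upperAddress
        VMA_SUBALLOCATION_TYPE_BUFFER,
        VMA_ALLOCATION_CREATE_STRATEGY_MIN_MEMORY_BIT,
        &request))
{
    metadata->Alloc(request, VMA_SUBALLOCATION_TYPE_BUFFER, userData);
    // ... later:
    metadata->Free(request.allocHandle);
}
*/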

#ifndef _VMA_BLOCK_METADATA_FUNCTIONS
VmaBlockMetadata::VmaBlockMetadata(const VkAllocationCallbacks* pAllocationCallbacks,
    VkDeviceSize bufferImageGranularity, bool isVirtual)
    : m_Size(0),
    m_pAllocationCallbacks(pAllocationCallbacks),
    m_BufferImageGranularity(bufferImageGranularity),
    m_IsVirtual(isVirtual) {}

void VmaBlockMetadata::DebugLogAllocation(VkDeviceSize offset, VkDeviceSize size, void* userData) const
{
    if (IsVirtual())
    {
        VMA_DEBUG_LOG("UNFREED VIRTUAL ALLOCATION; Offset: %llu; Size: %llu; UserData: %p", offset, size, userData);
    }
    else
    {
        VMA_ASSERT(userData != VMA_NULL);
        VmaAllocation allocation = reinterpret_cast<VmaAllocation>(userData);

        userData = allocation->GetUserData();
        const char* name = allocation->GetName();

#if VMA_STATS_STRING_ENABLED
        VMA_DEBUG_LOG("UNFREED ALLOCATION; Offset: %llu; Size: %llu; UserData: %p; Name: %s; Type: %s; Usage: %u",
            offset, size, userData, name ? name : "vma_empty",
            VMA_SUBALLOCATION_TYPE_NAMES[allocation->GetSuballocationType()],
            allocation->GetBufferImageUsage());
#else
        VMA_DEBUG_LOG("UNFREED ALLOCATION; Offset: %llu; Size: %llu; UserData: %p; Name: %s; Type: %u",
            offset, size, userData, name ? name : "vma_empty",
            (uint32_t)allocation->GetSuballocationType());
#endif // VMA_STATS_STRING_ENABLED
    }
}

#if VMA_STATS_STRING_ENABLED
void VmaBlockMetadata::PrintDetailedMap_Begin(class VmaJsonWriter& json,
    VkDeviceSize unusedBytes, size_t allocationCount, size_t unusedRangeCount) const
{
    json.WriteString("TotalBytes");
    json.WriteNumber(GetSize());

    json.WriteString("UnusedBytes");
    json.WriteSize(unusedBytes);

    json.WriteString("Allocations");
    json.WriteSize(allocationCount);

    json.WriteString("UnusedRanges");
    json.WriteSize(unusedRangeCount);

    json.WriteString("Suballocations");
    json.BeginArray();
}

void VmaBlockMetadata::PrintDetailedMap_Allocation(class VmaJsonWriter& json,
    VkDeviceSize offset, VkDeviceSize size, void* userData) const
{
    json.BeginObject(true);

    json.WriteString("Offset");
    json.WriteNumber(offset);

    if (IsVirtual())
    {
        json.WriteString("Size");
        json.WriteNumber(size);
        if (userData)
        {
            json.WriteString("CustomData");
            json.BeginString();
            json.ContinueString_Pointer(userData);
            json.EndString();
        }
    }
    else
    {
        ((VmaAllocation)userData)->PrintParameters(json);
    }

    json.EndObject();
}

void VmaBlockMetadata::PrintDetailedMap_UnusedRange(class VmaJsonWriter& json,
    VkDeviceSize offset, VkDeviceSize size) const
{
    json.BeginObject(true);

    json.WriteString("Offset");
    json.WriteNumber(offset);

    json.WriteString("Type");
    json.WriteString(VMA_SUBALLOCATION_TYPE_NAMES[VMA_SUBALLOCATION_TYPE_FREE]);

    json.WriteString("Size");
    json.WriteNumber(size);

    json.EndObject();
}

void VmaBlockMetadata::PrintDetailedMap_End(class VmaJsonWriter& json) const
{
    json.EndArray();
}
#endif // VMA_STATS_STRING_ENABLED
#endif // _VMA_BLOCK_METADATA_FUNCTIONS
#endif // _VMA_BLOCK_METADATA

#ifndef _VMA_BLOCK_BUFFER_IMAGE_GRANULARITY
// Before deleting an object of this class, remember to call Destroy().
class VmaBlockBufferImageGranularity final
{
public:
    struct ValidationContext
    {
        const VkAllocationCallbacks* allocCallbacks;
        uint16_t* pageAllocs;
    };

    VmaBlockBufferImageGranularity(VkDeviceSize bufferImageGranularity);
    ~VmaBlockBufferImageGranularity();

    bool IsEnabled() const { return m_BufferImageGranularity > MAX_LOW_BUFFER_IMAGE_GRANULARITY; }

    void Init(const VkAllocationCallbacks* pAllocationCallbacks, VkDeviceSize size);
    // Before destroying the object you must call Destroy() to free its memory.
    void Destroy(const VkAllocationCallbacks* pAllocationCallbacks);

    void RoundupAllocRequest(VmaSuballocationType allocType,
        VkDeviceSize& inOutAllocSize,
        VkDeviceSize& inOutAllocAlignment) const;

    bool CheckConflictAndAlignUp(VkDeviceSize& inOutAllocOffset,
        VkDeviceSize allocSize,
        VkDeviceSize blockOffset,
        VkDeviceSize blockSize,
        VmaSuballocationType allocType) const;

    void AllocPages(uint8_t allocType, VkDeviceSize offset, VkDeviceSize size);
    void FreePages(VkDeviceSize offset, VkDeviceSize size);
    void Clear();

    ValidationContext StartValidation(const VkAllocationCallbacks* pAllocationCallbacks,
        bool isVirtual) const;
    bool Validate(ValidationContext& ctx, VkDeviceSize offset, VkDeviceSize size) const;
    bool FinishValidation(ValidationContext& ctx) const;

private:
    static const uint16_t MAX_LOW_BUFFER_IMAGE_GRANULARITY = 256;

    struct RegionInfo
    {
        uint8_t allocType;
        uint16_t allocCount;
    };

    VkDeviceSize m_BufferImageGranularity;
    uint32_t m_RegionCount;
    RegionInfo* m_RegionInfo;

    uint32_t GetStartPage(VkDeviceSize offset) const { return OffsetToPageIndex(offset & ~(m_BufferImageGranularity - 1)); }
    uint32_t GetEndPage(VkDeviceSize offset, VkDeviceSize size) const { return OffsetToPageIndex((offset + size - 1) & ~(m_BufferImageGranularity - 1)); }

    uint32_t OffsetToPageIndex(VkDeviceSize offset) const;
    void AllocPage(RegionInfo& page, uint8_t allocType);
};
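
/*
Worked example (assuming OffsetToPageIndex divides by m_BufferImageGranularity,
which must be a power of two for the masking above to be exact): with
m_BufferImageGranularity = 1024, an allocation at offset 3000 of size 500 spans
bytes [3000, 3500). GetStartPage masks 3000 down to 2048 -> page 2; GetEndPage
masks 3499 down to 3072 -> page 3. Only these two boundary pages are tracked in
m_RegionInfo, which is enough to detect granularity conflicts at region edges.
*/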
6587 
6588 #ifndef _VMA_BLOCK_BUFFER_IMAGE_GRANULARITY_FUNCTIONS
VmaBlockBufferImageGranularity(VkDeviceSize bufferImageGranularity)6589 VmaBlockBufferImageGranularity::VmaBlockBufferImageGranularity(VkDeviceSize bufferImageGranularity)
6590     : m_BufferImageGranularity(bufferImageGranularity),
6591     m_RegionCount(0),
6592     m_RegionInfo(VMA_NULL) {}
6593 
~VmaBlockBufferImageGranularity()6594 VmaBlockBufferImageGranularity::~VmaBlockBufferImageGranularity()
6595 {
6596     VMA_ASSERT(m_RegionInfo == VMA_NULL && "Free not called before destroying object!");
6597 }
6598 
Init(const VkAllocationCallbacks * pAllocationCallbacks,VkDeviceSize size)6599 void VmaBlockBufferImageGranularity::Init(const VkAllocationCallbacks* pAllocationCallbacks, VkDeviceSize size)
6600 {
6601     if (IsEnabled())
6602     {
6603         m_RegionCount = static_cast<uint32_t>(VmaDivideRoundingUp(size, m_BufferImageGranularity));
6604         m_RegionInfo = vma_new_array(pAllocationCallbacks, RegionInfo, m_RegionCount);
6605         memset(m_RegionInfo, 0, m_RegionCount * sizeof(RegionInfo));
6606     }
6607 }
6608 
Destroy(const VkAllocationCallbacks * pAllocationCallbacks)6609 void VmaBlockBufferImageGranularity::Destroy(const VkAllocationCallbacks* pAllocationCallbacks)
6610 {
6611     if (m_RegionInfo)
6612     {
6613         vma_delete_array(pAllocationCallbacks, m_RegionInfo, m_RegionCount);
6614         m_RegionInfo = VMA_NULL;
6615     }
6616 }
6617 
RoundupAllocRequest(VmaSuballocationType allocType,VkDeviceSize & inOutAllocSize,VkDeviceSize & inOutAllocAlignment)6618 void VmaBlockBufferImageGranularity::RoundupAllocRequest(VmaSuballocationType allocType,
6619     VkDeviceSize& inOutAllocSize,
6620     VkDeviceSize& inOutAllocAlignment) const
6621 {
6622     if (m_BufferImageGranularity > 1 &&
6623         m_BufferImageGranularity <= MAX_LOW_BUFFER_IMAGE_GRANULARITY)
6624     {
6625         if (allocType == VMA_SUBALLOCATION_TYPE_UNKNOWN ||
6626             allocType == VMA_SUBALLOCATION_TYPE_IMAGE_UNKNOWN ||
6627             allocType == VMA_SUBALLOCATION_TYPE_IMAGE_OPTIMAL)
6628         {
6629             inOutAllocAlignment = VMA_MAX(inOutAllocAlignment, m_BufferImageGranularity);
6630             inOutAllocSize = VmaAlignUp(inOutAllocSize, m_BufferImageGranularity);
6631         }
6632     }
6633 }
6634 
CheckConflictAndAlignUp(VkDeviceSize & inOutAllocOffset,VkDeviceSize allocSize,VkDeviceSize blockOffset,VkDeviceSize blockSize,VmaSuballocationType allocType)6635 bool VmaBlockBufferImageGranularity::CheckConflictAndAlignUp(VkDeviceSize& inOutAllocOffset,
6636     VkDeviceSize allocSize,
6637     VkDeviceSize blockOffset,
6638     VkDeviceSize blockSize,
6639     VmaSuballocationType allocType) const
6640 {
6641     if (IsEnabled())
6642     {
6643         uint32_t startPage = GetStartPage(inOutAllocOffset);
6644         if (m_RegionInfo[startPage].allocCount > 0 &&
6645             VmaIsBufferImageGranularityConflict(static_cast<VmaSuballocationType>(m_RegionInfo[startPage].allocType), allocType))
6646         {
6647             inOutAllocOffset = VmaAlignUp(inOutAllocOffset, m_BufferImageGranularity);
6648             if (blockSize < allocSize + inOutAllocOffset - blockOffset)
6649                 return true;
6650             ++startPage;
6651         }
6652         uint32_t endPage = GetEndPage(inOutAllocOffset, allocSize);
6653         if (endPage != startPage &&
6654             m_RegionInfo[endPage].allocCount > 0 &&
6655             VmaIsBufferImageGranularityConflict(static_cast<VmaSuballocationType>(m_RegionInfo[endPage].allocType), allocType))
6656         {
6657             return true;
6658         }
6659     }
6660     return false;
6661 }
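// Worked example for the check above, assuming m_BufferImageGranularity ==
// 4096: a buffer requested at inOutAllocOffset == 4100 whose start page
// (page 1) already holds an optimal-tiling image conflicts with it, so the
// offset is aligned up to 8192. If the allocation then no longer fits within
// the block, true is returned, meaning "unresolvable conflict"; otherwise
// only the (possibly new) end page remains to be checked.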
6662 
6663 void VmaBlockBufferImageGranularity::AllocPages(uint8_t allocType, VkDeviceSize offset, VkDeviceSize size)
6664 {
6665     if (IsEnabled())
6666     {
6667         uint32_t startPage = GetStartPage(offset);
6668         AllocPage(m_RegionInfo[startPage], allocType);
6669 
6670         uint32_t endPage = GetEndPage(offset, size);
6671         if (startPage != endPage)
6672             AllocPage(m_RegionInfo[endPage], allocType);
6673     }
6674 }
6675 
6676 void VmaBlockBufferImageGranularity::FreePages(VkDeviceSize offset, VkDeviceSize size)
6677 {
6678     if (IsEnabled())
6679     {
6680         uint32_t startPage = GetStartPage(offset);
6681         --m_RegionInfo[startPage].allocCount;
6682         if (m_RegionInfo[startPage].allocCount == 0)
6683             m_RegionInfo[startPage].allocType = VMA_SUBALLOCATION_TYPE_FREE;
6684         uint32_t endPage = GetEndPage(offset, size);
6685         if (startPage != endPage)
6686         {
6687             --m_RegionInfo[endPage].allocCount;
6688             if (m_RegionInfo[endPage].allocCount == 0)
6689                 m_RegionInfo[endPage].allocType = VMA_SUBALLOCATION_TYPE_FREE;
6690         }
6691     }
6692 }
6693 
6694 void VmaBlockBufferImageGranularity::Clear()
6695 {
6696     if (m_RegionInfo)
6697         memset(m_RegionInfo, 0, m_RegionCount * sizeof(RegionInfo));
6698 }
6699 
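// Validation is a three-phase protocol: StartValidation() allocates a scratch
// array of per-page counters, Validate() is then invoked once per live
// allocation and bumps the counters for the pages that allocation touches,
// and FinishValidation() compares the recomputed counters against
// m_RegionInfo and frees the scratch array. A minimal usage sketch (the
// caller code below is hypothetical, not taken from this file):
//
//   auto ctx = granularity.StartValidation(pAllocationCallbacks, isVirtual);
//   // for each live allocation in the block:
//   VMA_VALIDATE(granularity.Validate(ctx, offset, size));
//   VMA_VALIDATE(granularity.FinishValidation(ctx));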
6700 VmaBlockBufferImageGranularity::ValidationContext VmaBlockBufferImageGranularity::StartValidation(
6701     const VkAllocationCallbacks* pAllocationCallbacks, bool isVirtual) const
6702 {
6703     ValidationContext ctx{ pAllocationCallbacks, VMA_NULL };
6704     if (!isVirtual && IsEnabled())
6705     {
6706         ctx.pageAllocs = vma_new_array(pAllocationCallbacks, uint16_t, m_RegionCount);
6707         memset(ctx.pageAllocs, 0, m_RegionCount * sizeof(uint16_t));
6708     }
6709     return ctx;
6710 }
6711 
6712 bool VmaBlockBufferImageGranularity::Validate(ValidationContext& ctx,
6713     VkDeviceSize offset, VkDeviceSize size) const
6714 {
6715     if (IsEnabled())
6716     {
6717         uint32_t start = GetStartPage(offset);
6718         ++ctx.pageAllocs[start];
6719         VMA_VALIDATE(m_RegionInfo[start].allocCount > 0);
6720 
6721         uint32_t end = GetEndPage(offset, size);
6722         if (start != end)
6723         {
6724             ++ctx.pageAllocs[end];
6725             VMA_VALIDATE(m_RegionInfo[end].allocCount > 0);
6726         }
6727     }
6728     return true;
6729 }
6730 
6731 bool VmaBlockBufferImageGranularity::FinishValidation(ValidationContext& ctx) const
6732 {
6733     // Check proper page structure
6734     if (IsEnabled())
6735     {
6736         VMA_ASSERT(ctx.pageAllocs != VMA_NULL && "Validation context not initialized!");
6737 
6738         for (uint32_t page = 0; page < m_RegionCount; ++page)
6739         {
6740             VMA_VALIDATE(ctx.pageAllocs[page] == m_RegionInfo[page].allocCount);
6741         }
6742         vma_delete_array(ctx.allocCallbacks, ctx.pageAllocs, m_RegionCount);
6743         ctx.pageAllocs = VMA_NULL;
6744     }
6745     return true;
6746 }
6747 
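// The page index is a shift rather than a division: the bufferImageGranularity
// reported by Vulkan is a power of two, so VMA_BITSCAN_MSB(granularity)
// equals log2(granularity). For example, with m_BufferImageGranularity ==
// 1024 the MSB index is 10 and OffsetToPageIndex(5000) == 5000 >> 10 == 4.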
6748 uint32_t VmaBlockBufferImageGranularity::OffsetToPageIndex(VkDeviceSize offset) const
6749 {
6750     return static_cast<uint32_t>(offset >> VMA_BITSCAN_MSB(m_BufferImageGranularity));
6751 }
6752 
6753 void VmaBlockBufferImageGranularity::AllocPage(RegionInfo& page, uint8_t allocType)
6754 {
6755     // When the current alloc type is free, it can be overridden by the new type.
6756     if (page.allocCount == 0 || (page.allocCount > 0 && page.allocType == VMA_SUBALLOCATION_TYPE_FREE))
6757         page.allocType = allocType;
6758 
6759     ++page.allocCount;
6760 }
6761 #endif // _VMA_BLOCK_BUFFER_IMAGE_GRANULARITY_FUNCTIONS
6762 #endif // _VMA_BLOCK_BUFFER_IMAGE_GRANULARITY
6763 
6764 #if 0
6765 #ifndef _VMA_BLOCK_METADATA_GENERIC
6766 class VmaBlockMetadata_Generic : public VmaBlockMetadata
6767 {
6768     friend class VmaDefragmentationAlgorithm_Generic;
6769     friend class VmaDefragmentationAlgorithm_Fast;
6770     VMA_CLASS_NO_COPY(VmaBlockMetadata_Generic)
6771 public:
6772     VmaBlockMetadata_Generic(const VkAllocationCallbacks* pAllocationCallbacks,
6773         VkDeviceSize bufferImageGranularity, bool isVirtual);
6774     virtual ~VmaBlockMetadata_Generic() = default;
6775 
6776     size_t GetAllocationCount() const override { return m_Suballocations.size() - m_FreeCount; }
6777     VkDeviceSize GetSumFreeSize() const override { return m_SumFreeSize; }
6778     bool IsEmpty() const override { return (m_Suballocations.size() == 1) && (m_FreeCount == 1); }
6779     void Free(VmaAllocHandle allocHandle) override { FreeSuballocation(FindAtOffset((VkDeviceSize)allocHandle - 1)); }
6780     VkDeviceSize GetAllocationOffset(VmaAllocHandle allocHandle) const override { return (VkDeviceSize)allocHandle - 1; };
6781 
6782     void Init(VkDeviceSize size) override;
6783     bool Validate() const override;
6784 
6785     void AddDetailedStatistics(VmaDetailedStatistics& inoutStats) const override;
6786     void AddStatistics(VmaStatistics& inoutStats) const override;
6787 
6788 #if VMA_STATS_STRING_ENABLED
6789     void PrintDetailedMap(class VmaJsonWriter& json, uint32_t mapRefCount) const override;
6790 #endif
6791 
6792     bool CreateAllocationRequest(
6793         VkDeviceSize allocSize,
6794         VkDeviceSize allocAlignment,
6795         bool upperAddress,
6796         VmaSuballocationType allocType,
6797         uint32_t strategy,
6798         VmaAllocationRequest* pAllocationRequest) override;
6799 
6800     VkResult CheckCorruption(const void* pBlockData) override;
6801 
6802     void Alloc(
6803         const VmaAllocationRequest& request,
6804         VmaSuballocationType type,
6805         void* userData) override;
6806 
6807     void GetAllocationInfo(VmaAllocHandle allocHandle, VmaVirtualAllocationInfo& outInfo) override;
6808     void* GetAllocationUserData(VmaAllocHandle allocHandle) const override;
6809     VmaAllocHandle GetAllocationListBegin() const override;
6810     VmaAllocHandle GetNextAllocation(VmaAllocHandle prevAlloc) const override;
6811     void Clear() override;
6812     void SetAllocationUserData(VmaAllocHandle allocHandle, void* userData) override;
6813     void DebugLogAllAllocations() const override;
6814 
6815 private:
6816     uint32_t m_FreeCount;
6817     VkDeviceSize m_SumFreeSize;
6818     VmaSuballocationList m_Suballocations;
6819     // Suballocations that are free. Sorted by size, ascending.
6820     VmaVector<VmaSuballocationList::iterator, VmaStlAllocator<VmaSuballocationList::iterator>> m_FreeSuballocationsBySize;
6821 
6822     VkDeviceSize AlignAllocationSize(VkDeviceSize size) const { return IsVirtual() ? size : VmaAlignUp(size, (VkDeviceSize)16); }
6823 
6824     VmaSuballocationList::iterator FindAtOffset(VkDeviceSize offset) const;
6825     bool ValidateFreeSuballocationList() const;
6826 
6827     // Checks if a requested suballocation with the given parameters can be placed at the given suballocItem.
6828     // If yes, fills pAllocHandle and returns true. If no, returns false.
6829     bool CheckAllocation(
6830         VkDeviceSize allocSize,
6831         VkDeviceSize allocAlignment,
6832         VmaSuballocationType allocType,
6833         VmaSuballocationList::const_iterator suballocItem,
6834         VmaAllocHandle* pAllocHandle) const;
6835 
6836     // Given free suballocation, it merges it with following one, which must also be free.
6837     void MergeFreeWithNext(VmaSuballocationList::iterator item);
6838     // Releases given suballocation, making it free.
6839     // Merges it with adjacent free suballocations if applicable.
6840     // Returns iterator to new free suballocation at this place.
6841     VmaSuballocationList::iterator FreeSuballocation(VmaSuballocationList::iterator suballocItem);
6842     // Given free suballocation, it inserts it into sorted list of
6843     // m_FreeSuballocationsBySize if it is suitable.
6844     void RegisterFreeSuballocation(VmaSuballocationList::iterator item);
6845     // Given free suballocation, it removes it from sorted list of
6846     // m_FreeSuballocationsBySize if it is suitable.
6847     void UnregisterFreeSuballocation(VmaSuballocationList::iterator item);
6848 };
6849 
6850 #ifndef _VMA_BLOCK_METADATA_GENERIC_FUNCTIONS
6851 VmaBlockMetadata_Generic::VmaBlockMetadata_Generic(const VkAllocationCallbacks* pAllocationCallbacks,
6852     VkDeviceSize bufferImageGranularity, bool isVirtual)
6853     : VmaBlockMetadata(pAllocationCallbacks, bufferImageGranularity, isVirtual),
6854     m_FreeCount(0),
6855     m_SumFreeSize(0),
6856     m_Suballocations(VmaStlAllocator<VmaSuballocation>(pAllocationCallbacks)),
6857     m_FreeSuballocationsBySize(VmaStlAllocator<VmaSuballocationList::iterator>(pAllocationCallbacks)) {}
6858 
6859 void VmaBlockMetadata_Generic::Init(VkDeviceSize size)
6860 {
6861     VmaBlockMetadata::Init(size);
6862 
6863     m_FreeCount = 1;
6864     m_SumFreeSize = size;
6865 
6866     VmaSuballocation suballoc = {};
6867     suballoc.offset = 0;
6868     suballoc.size = size;
6869     suballoc.type = VMA_SUBALLOCATION_TYPE_FREE;
6870 
6871     m_Suballocations.push_back(suballoc);
6872     m_FreeSuballocationsBySize.push_back(m_Suballocations.begin());
6873 }
6874 
6875 bool VmaBlockMetadata_Generic::Validate() const
6876 {
6877     VMA_VALIDATE(!m_Suballocations.empty());
6878 
6879     // Expected offset of new suballocation as calculated from previous ones.
6880     VkDeviceSize calculatedOffset = 0;
6881     // Expected number of free suballocations as calculated from traversing their list.
6882     uint32_t calculatedFreeCount = 0;
6883     // Expected sum size of free suballocations as calculated from traversing their list.
6884     VkDeviceSize calculatedSumFreeSize = 0;
6885     // Expected number of free suballocations that should be registered in
6886     // m_FreeSuballocationsBySize calculated from traversing their list.
6887     size_t freeSuballocationsToRegister = 0;
6888     // True if previous visited suballocation was free.
6889     bool prevFree = false;
6890 
6891     const VkDeviceSize debugMargin = GetDebugMargin();
6892 
6893     for (const auto& subAlloc : m_Suballocations)
6894     {
6895         // Actual offset of this suballocation doesn't match expected one.
6896         VMA_VALIDATE(subAlloc.offset == calculatedOffset);
6897 
6898         const bool currFree = (subAlloc.type == VMA_SUBALLOCATION_TYPE_FREE);
6899         // Two adjacent free suballocations are invalid. They should be merged.
6900         VMA_VALIDATE(!prevFree || !currFree);
6901 
6902         VmaAllocation alloc = (VmaAllocation)subAlloc.userData;
6903         if (!IsVirtual())
6904         {
6905             VMA_VALIDATE(currFree == (alloc == VK_NULL_HANDLE));
6906         }
6907 
6908         if (currFree)
6909         {
6910             calculatedSumFreeSize += subAlloc.size;
6911             ++calculatedFreeCount;
6912             ++freeSuballocationsToRegister;
6913 
6914             // Margin required between allocations - every free space must be at least that large.
6915             VMA_VALIDATE(subAlloc.size >= debugMargin);
6916         }
6917         else
6918         {
6919             if (!IsVirtual())
6920             {
6921                 VMA_VALIDATE((VkDeviceSize)alloc->GetAllocHandle() == subAlloc.offset + 1);
6922                 VMA_VALIDATE(alloc->GetSize() == subAlloc.size);
6923             }
6924 
6925             // Margin required between allocations - previous allocation must be free.
6926             VMA_VALIDATE(debugMargin == 0 || prevFree);
6927         }
6928 
6929         calculatedOffset += subAlloc.size;
6930         prevFree = currFree;
6931     }
6932 
6933     // Number of free suballocations registered in m_FreeSuballocationsBySize doesn't
6934     // match expected one.
6935     VMA_VALIDATE(m_FreeSuballocationsBySize.size() == freeSuballocationsToRegister);
6936 
6937     VkDeviceSize lastSize = 0;
6938     for (size_t i = 0; i < m_FreeSuballocationsBySize.size(); ++i)
6939     {
6940         VmaSuballocationList::iterator suballocItem = m_FreeSuballocationsBySize[i];
6941 
6942         // Only free suballocations can be registered in m_FreeSuballocationsBySize.
6943         VMA_VALIDATE(suballocItem->type == VMA_SUBALLOCATION_TYPE_FREE);
6944         // They must be sorted by size ascending.
6945         VMA_VALIDATE(suballocItem->size >= lastSize);
6946 
6947         lastSize = suballocItem->size;
6948     }
6949 
6950     // Check if totals match calculated values.
6951     VMA_VALIDATE(ValidateFreeSuballocationList());
6952     VMA_VALIDATE(calculatedOffset == GetSize());
6953     VMA_VALIDATE(calculatedSumFreeSize == m_SumFreeSize);
6954     VMA_VALIDATE(calculatedFreeCount == m_FreeCount);
6955 
6956     return true;
6957 }
6958 
6959 void VmaBlockMetadata_Generic::AddDetailedStatistics(VmaDetailedStatistics& inoutStats) const
6960 {
6961     const uint32_t rangeCount = (uint32_t)m_Suballocations.size();
6962     inoutStats.statistics.blockCount++;
6963     inoutStats.statistics.blockBytes += GetSize();
6964 
6965     for (const auto& suballoc : m_Suballocations)
6966     {
6967         if (suballoc.type != VMA_SUBALLOCATION_TYPE_FREE)
6968             VmaAddDetailedStatisticsAllocation(inoutStats, suballoc.size);
6969         else
6970             VmaAddDetailedStatisticsUnusedRange(inoutStats, suballoc.size);
6971     }
6972 }
6973 
6974 void VmaBlockMetadata_Generic::AddStatistics(VmaStatistics& inoutStats) const
6975 {
6976     inoutStats.blockCount++;
6977     inoutStats.allocationCount += (uint32_t)m_Suballocations.size() - m_FreeCount;
6978     inoutStats.blockBytes += GetSize();
6979     inoutStats.allocationBytes += GetSize() - m_SumFreeSize;
6980 }
6981 
6982 #if VMA_STATS_STRING_ENABLED
6983 void VmaBlockMetadata_Generic::PrintDetailedMap(class VmaJsonWriter& json, uint32_t mapRefCount) const
6984 {
6985     PrintDetailedMap_Begin(json,
6986         m_SumFreeSize, // unusedBytes
6987         m_Suballocations.size() - (size_t)m_FreeCount, // allocationCount
6988         m_FreeCount, // unusedRangeCount
6989         mapRefCount);
6990 
6991     for (const auto& suballoc : m_Suballocations)
6992     {
6993         if (suballoc.type == VMA_SUBALLOCATION_TYPE_FREE)
6994         {
6995             PrintDetailedMap_UnusedRange(json, suballoc.offset, suballoc.size);
6996         }
6997         else
6998         {
6999             PrintDetailedMap_Allocation(json, suballoc.offset, suballoc.size, suballoc.userData);
7000         }
7001     }
7002 
7003     PrintDetailedMap_End(json);
7004 }
7005 #endif // VMA_STATS_STRING_ENABLED
7006 
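// The search below implements three strategies: with no explicit strategy or
// VMA_ALLOCATION_CREATE_STRATEGY_MIN_MEMORY_BIT it binary-searches
// m_FreeSuballocationsBySize for the smallest free range that fits (best
// fit); with the internal MIN_OFFSET strategy it walks the suballocation
// list front to back and takes the first fitting free range (lowest offset);
// otherwise it tries free ranges from biggest to smallest, which usually
// succeeds on the first candidate.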
7007 bool VmaBlockMetadata_Generic::CreateAllocationRequest(
7008     VkDeviceSize allocSize,
7009     VkDeviceSize allocAlignment,
7010     bool upperAddress,
7011     VmaSuballocationType allocType,
7012     uint32_t strategy,
7013     VmaAllocationRequest* pAllocationRequest)
7014 {
7015     VMA_ASSERT(allocSize > 0);
7016     VMA_ASSERT(!upperAddress);
7017     VMA_ASSERT(allocType != VMA_SUBALLOCATION_TYPE_FREE);
7018     VMA_ASSERT(pAllocationRequest != VMA_NULL);
7019     VMA_HEAVY_ASSERT(Validate());
7020 
7021     allocSize = AlignAllocationSize(allocSize);
7022 
7023     pAllocationRequest->type = VmaAllocationRequestType::Normal;
7024     pAllocationRequest->size = allocSize;
7025 
7026     const VkDeviceSize debugMargin = GetDebugMargin();
7027 
7028     // There is not enough total free space in this block to fulfill the request: Early return.
7029     if (m_SumFreeSize < allocSize + debugMargin)
7030     {
7031         return false;
7032     }
7033 
7034     // New algorithm, efficiently searching freeSuballocationsBySize.
7035     const size_t freeSuballocCount = m_FreeSuballocationsBySize.size();
7036     if (freeSuballocCount > 0)
7037     {
7038         if (strategy == 0 ||
7039             strategy == VMA_ALLOCATION_CREATE_STRATEGY_MIN_MEMORY_BIT)
7040         {
7041             // Find first free suballocation with size not less than allocSize + debugMargin.
7042             VmaSuballocationList::iterator* const it = VmaBinaryFindFirstNotLess(
7043                 m_FreeSuballocationsBySize.data(),
7044                 m_FreeSuballocationsBySize.data() + freeSuballocCount,
7045                 allocSize + debugMargin,
7046                 VmaSuballocationItemSizeLess());
7047             size_t index = it - m_FreeSuballocationsBySize.data();
7048             for (; index < freeSuballocCount; ++index)
7049             {
7050                 if (CheckAllocation(
7051                     allocSize,
7052                     allocAlignment,
7053                     allocType,
7054                     m_FreeSuballocationsBySize[index],
7055                     &pAllocationRequest->allocHandle))
7056                 {
7057                     pAllocationRequest->item = m_FreeSuballocationsBySize[index];
7058                     return true;
7059                 }
7060             }
7061         }
7062         else if (strategy == VMA_ALLOCATION_INTERNAL_STRATEGY_MIN_OFFSET)
7063         {
7064             for (VmaSuballocationList::iterator it = m_Suballocations.begin();
7065                 it != m_Suballocations.end();
7066                 ++it)
7067             {
7068                 if (it->type == VMA_SUBALLOCATION_TYPE_FREE && CheckAllocation(
7069                     allocSize,
7070                     allocAlignment,
7071                     allocType,
7072                     it,
7073                     &pAllocationRequest->allocHandle))
7074                 {
7075                     pAllocationRequest->item = it;
7076                     return true;
7077                 }
7078             }
7079         }
7080         else
7081         {
7082             VMA_ASSERT(strategy & (VMA_ALLOCATION_CREATE_STRATEGY_MIN_TIME_BIT | VMA_ALLOCATION_CREATE_STRATEGY_MIN_OFFSET_BIT));
7083             // Search starting from the biggest suballocations.
7084             for (size_t index = freeSuballocCount; index--; )
7085             {
7086                 if (CheckAllocation(
7087                     allocSize,
7088                     allocAlignment,
7089                     allocType,
7090                     m_FreeSuballocationsBySize[index],
7091                     &pAllocationRequest->allocHandle))
7092                 {
7093                     pAllocationRequest->item = m_FreeSuballocationsBySize[index];
7094                     return true;
7095                 }
7096             }
7097         }
7098     }
7099 
7100     return false;
7101 }
7102 
7103 VkResult VmaBlockMetadata_Generic::CheckCorruption(const void* pBlockData)
7104 {
7105     for (auto& suballoc : m_Suballocations)
7106     {
7107         if (suballoc.type != VMA_SUBALLOCATION_TYPE_FREE)
7108         {
7109             if (!VmaValidateMagicValue(pBlockData, suballoc.offset + suballoc.size))
7110             {
7111                 VMA_ASSERT(0 && "MEMORY CORRUPTION DETECTED AFTER VALIDATED ALLOCATION!");
7112                 return VK_ERROR_UNKNOWN;
7113             }
7114         }
7115     }
7116 
7117     return VK_SUCCESS;
7118 }
7119 
7120 void VmaBlockMetadata_Generic::Alloc(
7121     const VmaAllocationRequest& request,
7122     VmaSuballocationType type,
7123     void* userData)
7124 {
7125     VMA_ASSERT(request.type == VmaAllocationRequestType::Normal);
7126     VMA_ASSERT(request.item != m_Suballocations.end());
7127     VmaSuballocation& suballoc = *request.item;
7128     // Given suballocation is a free block.
7129     VMA_ASSERT(suballoc.type == VMA_SUBALLOCATION_TYPE_FREE);
7130 
7131     // Given offset is inside this suballocation.
7132     VMA_ASSERT((VkDeviceSize)request.allocHandle - 1 >= suballoc.offset);
7133     const VkDeviceSize paddingBegin = (VkDeviceSize)request.allocHandle - suballoc.offset - 1;
7134     VMA_ASSERT(suballoc.size >= paddingBegin + request.size);
7135     const VkDeviceSize paddingEnd = suballoc.size - paddingBegin - request.size;
7136 
7137     // Unregister this free suballocation from m_FreeSuballocationsBySize and update
7138     // it to become used.
7139     UnregisterFreeSuballocation(request.item);
7140 
7141     suballoc.offset = (VkDeviceSize)request.allocHandle - 1;
7142     suballoc.size = request.size;
7143     suballoc.type = type;
7144     suballoc.userData = userData;
7145 
7146     // If there are any free bytes remaining at the end, insert new free suballocation after current one.
7147     if (paddingEnd)
7148     {
7149         VmaSuballocation paddingSuballoc = {};
7150         paddingSuballoc.offset = suballoc.offset + suballoc.size;
7151         paddingSuballoc.size = paddingEnd;
7152         paddingSuballoc.type = VMA_SUBALLOCATION_TYPE_FREE;
7153         VmaSuballocationList::iterator next = request.item;
7154         ++next;
7155         const VmaSuballocationList::iterator paddingEndItem =
7156             m_Suballocations.insert(next, paddingSuballoc);
7157         RegisterFreeSuballocation(paddingEndItem);
7158     }
7159 
7160     // If there are any free bytes remaining at the beginning, insert new free suballocation before current one.
7161     if (paddingBegin)
7162     {
7163         VmaSuballocation paddingSuballoc = {};
7164         paddingSuballoc.offset = suballoc.offset - paddingBegin;
7165         paddingSuballoc.size = paddingBegin;
7166         paddingSuballoc.type = VMA_SUBALLOCATION_TYPE_FREE;
7167         const VmaSuballocationList::iterator paddingBeginItem =
7168             m_Suballocations.insert(request.item, paddingSuballoc);
7169         RegisterFreeSuballocation(paddingBeginItem);
7170     }
7171 
7172     // Update totals.
7173     m_FreeCount = m_FreeCount - 1;
7174     if (paddingBegin > 0)
7175     {
7176         ++m_FreeCount;
7177     }
7178     if (paddingEnd > 0)
7179     {
7180         ++m_FreeCount;
7181     }
7182     m_SumFreeSize -= request.size;
7183 }
7184 
7185 void VmaBlockMetadata_Generic::GetAllocationInfo(VmaAllocHandle allocHandle, VmaVirtualAllocationInfo& outInfo)
7186 {
7187     outInfo.offset = (VkDeviceSize)allocHandle - 1;
7188     const VmaSuballocation& suballoc = *FindAtOffset(outInfo.offset);
7189     outInfo.size = suballoc.size;
7190     outInfo.pUserData = suballoc.userData;
7191 }
7192 
7193 void* VmaBlockMetadata_Generic::GetAllocationUserData(VmaAllocHandle allocHandle) const
7194 {
7195     return FindAtOffset((VkDeviceSize)allocHandle - 1)->userData;
7196 }
7197 
7198 VmaAllocHandle VmaBlockMetadata_Generic::GetAllocationListBegin() const
7199 {
7200     if (IsEmpty())
7201         return VK_NULL_HANDLE;
7202 
7203     for (const auto& suballoc : m_Suballocations)
7204     {
7205         if (suballoc.type != VMA_SUBALLOCATION_TYPE_FREE)
7206             return (VmaAllocHandle)(suballoc.offset + 1);
7207     }
7208     VMA_ASSERT(false && "Should contain at least 1 allocation!");
7209     return VK_NULL_HANDLE;
7210 }
7211 
7212 VmaAllocHandle VmaBlockMetadata_Generic::GetNextAllocation(VmaAllocHandle prevAlloc) const
7213 {
7214     VmaSuballocationList::const_iterator prev = FindAtOffset((VkDeviceSize)prevAlloc - 1);
7215 
7216     for (VmaSuballocationList::const_iterator it = ++prev; it != m_Suballocations.end(); ++it)
7217     {
7218         if (it->type != VMA_SUBALLOCATION_TYPE_FREE)
7219             return (VmaAllocHandle)(it->offset + 1);
7220     }
7221     return VK_NULL_HANDLE;
7222 }
7223 
7224 void VmaBlockMetadata_Generic::Clear()
7225 {
7226     const VkDeviceSize size = GetSize();
7227 
7228     VMA_ASSERT(IsVirtual());
7229     m_FreeCount = 1;
7230     m_SumFreeSize = size;
7231     m_Suballocations.clear();
7232     m_FreeSuballocationsBySize.clear();
7233 
7234     VmaSuballocation suballoc = {};
7235     suballoc.offset = 0;
7236     suballoc.size = size;
7237     suballoc.type = VMA_SUBALLOCATION_TYPE_FREE;
7238     m_Suballocations.push_back(suballoc);
7239 
7240     m_FreeSuballocationsBySize.push_back(m_Suballocations.begin());
7241 }
7242 
7243 void VmaBlockMetadata_Generic::SetAllocationUserData(VmaAllocHandle allocHandle, void* userData)
7244 {
7245     VmaSuballocation& suballoc = *FindAtOffset((VkDeviceSize)allocHandle - 1);
7246     suballoc.userData = userData;
7247 }
7248 
7249 void VmaBlockMetadata_Generic::DebugLogAllAllocations() const
7250 {
7251     for (const auto& suballoc : m_Suballocations)
7252     {
7253         if (suballoc.type != VMA_SUBALLOCATION_TYPE_FREE)
7254             DebugLogAllocation(suballoc.offset, suballoc.size, suballoc.userData);
7255     }
7256 }
7257 
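// FindAtOffset checks the two endpoints first and then falls back to a linear
// scan. The scan direction is a heuristic: `step` estimates the average
// suballocation size, and if the requested offset lies in the upper half of
// the occupied range, the list is walked from the back, roughly halving the
// expected number of visited items.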
7258 VmaSuballocationList::iterator VmaBlockMetadata_Generic::FindAtOffset(VkDeviceSize offset) const
7259 {
7260     VMA_HEAVY_ASSERT(!m_Suballocations.empty());
7261     const VkDeviceSize last = m_Suballocations.rbegin()->offset;
7262     if (last == offset)
7263         return m_Suballocations.rbegin().drop_const();
7264     const VkDeviceSize first = m_Suballocations.begin()->offset;
7265     if (first == offset)
7266         return m_Suballocations.begin().drop_const();
7267 
7268     const size_t suballocCount = m_Suballocations.size();
7269     const VkDeviceSize step = (last - first + m_Suballocations.begin()->size) / suballocCount;
7270     auto findSuballocation = [&](auto begin, auto end) -> VmaSuballocationList::iterator
7271     {
7272         for (auto suballocItem = begin;
7273             suballocItem != end;
7274             ++suballocItem)
7275         {
7276             if (suballocItem->offset == offset)
7277                 return suballocItem.drop_const();
7278         }
7279         VMA_ASSERT(false && "Not found!");
7280         return m_Suballocations.end().drop_const();
7281     };
7282     // If requested offset is closer to the end of range, search from the end
7283     if (offset - first > suballocCount * step / 2)
7284     {
7285         return findSuballocation(m_Suballocations.rbegin(), m_Suballocations.rend());
7286     }
7287     return findSuballocation(m_Suballocations.begin(), m_Suballocations.end());
7288 }
7289 
7290 bool VmaBlockMetadata_Generic::ValidateFreeSuballocationList() const
7291 {
7292     VkDeviceSize lastSize = 0;
7293     for (size_t i = 0, count = m_FreeSuballocationsBySize.size(); i < count; ++i)
7294     {
7295         const VmaSuballocationList::iterator it = m_FreeSuballocationsBySize[i];
7296 
7297         VMA_VALIDATE(it->type == VMA_SUBALLOCATION_TYPE_FREE);
7298         VMA_VALIDATE(it->size >= lastSize);
7299         lastSize = it->size;
7300     }
7301     return true;
7302 }
7303 
7304 bool VmaBlockMetadata_Generic::CheckAllocation(
7305     VkDeviceSize allocSize,
7306     VkDeviceSize allocAlignment,
7307     VmaSuballocationType allocType,
7308     VmaSuballocationList::const_iterator suballocItem,
7309     VmaAllocHandle* pAllocHandle) const
7310 {
7311     VMA_ASSERT(allocSize > 0);
7312     VMA_ASSERT(allocType != VMA_SUBALLOCATION_TYPE_FREE);
7313     VMA_ASSERT(suballocItem != m_Suballocations.cend());
7314     VMA_ASSERT(pAllocHandle != VMA_NULL);
7315 
7316     const VkDeviceSize debugMargin = GetDebugMargin();
7317     const VkDeviceSize bufferImageGranularity = GetBufferImageGranularity();
7318 
7319     const VmaSuballocation& suballoc = *suballocItem;
7320     VMA_ASSERT(suballoc.type == VMA_SUBALLOCATION_TYPE_FREE);
7321 
7322     // Size of this suballocation is too small for this request: Early return.
7323     if (suballoc.size < allocSize)
7324     {
7325         return false;
7326     }
7327 
7328     // Start from offset equal to beginning of this suballocation.
7329     VkDeviceSize offset = suballoc.offset + (suballocItem == m_Suballocations.cbegin() ? 0 : GetDebugMargin());
7330 
7331     // Apply debugMargin from the end of previous alloc.
7332     if (debugMargin > 0)
7333     {
7334         offset += debugMargin;
7335     }
7336 
7337     // Apply alignment.
7338     offset = VmaAlignUp(offset, allocAlignment);
7339 
7340     // Check previous suballocations for BufferImageGranularity conflicts.
7341     // Make bigger alignment if necessary.
7342     if (bufferImageGranularity > 1 && bufferImageGranularity != allocAlignment)
7343     {
7344         bool bufferImageGranularityConflict = false;
7345         VmaSuballocationList::const_iterator prevSuballocItem = suballocItem;
7346         while (prevSuballocItem != m_Suballocations.cbegin())
7347         {
7348             --prevSuballocItem;
7349             const VmaSuballocation& prevSuballoc = *prevSuballocItem;
7350             if (VmaBlocksOnSamePage(prevSuballoc.offset, prevSuballoc.size, offset, bufferImageGranularity))
7351             {
7352                 if (VmaIsBufferImageGranularityConflict(prevSuballoc.type, allocType))
7353                 {
7354                     bufferImageGranularityConflict = true;
7355                     break;
7356                 }
7357             }
7358             else
7359                 // Already on previous page.
7360                 break;
7361         }
7362         if (bufferImageGranularityConflict)
7363         {
7364             offset = VmaAlignUp(offset, bufferImageGranularity);
7365         }
7366     }
7367 
7368     // Calculate padding at the beginning based on current offset.
7369     const VkDeviceSize paddingBegin = offset - suballoc.offset;
7370 
7371     // Fail if requested size plus margin after is bigger than size of this suballocation.
7372     if (paddingBegin + allocSize + debugMargin > suballoc.size)
7373     {
7374         return false;
7375     }
7376 
7377     // Check next suballocations for BufferImageGranularity conflicts.
7378     // If conflict exists, allocation cannot be made here.
7379     if (allocSize % bufferImageGranularity || offset % bufferImageGranularity)
7380     {
7381         VmaSuballocationList::const_iterator nextSuballocItem = suballocItem;
7382         ++nextSuballocItem;
7383         while (nextSuballocItem != m_Suballocations.cend())
7384         {
7385             const VmaSuballocation& nextSuballoc = *nextSuballocItem;
7386             if (VmaBlocksOnSamePage(offset, allocSize, nextSuballoc.offset, bufferImageGranularity))
7387             {
7388                 if (VmaIsBufferImageGranularityConflict(allocType, nextSuballoc.type))
7389                 {
7390                     return false;
7391                 }
7392             }
7393             else
7394             {
7395                 // Already on next page.
7396                 break;
7397             }
7398             ++nextSuballocItem;
7399         }
7400     }
7401 
7402     *pAllocHandle = (VmaAllocHandle)(offset + 1);
7403     // All tests passed: Success. pAllocHandle is already filled.
7404     return true;
7405 }
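// Illustration of the two granularity outcomes above (numbers are made up):
// with bufferImageGranularity == 4096, a linear buffer placed at offset 4300
// directly after an optimal-tiling image ending at 4200 would share page 1
// with that image, so the buffer's offset is aligned up to 8192. A
// conflicting image that follows the buffer on its last page cannot be moved
// out of the way, so that case returns false instead.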
7406 
7407 void VmaBlockMetadata_Generic::MergeFreeWithNext(VmaSuballocationList::iterator item)
7408 {
7409     VMA_ASSERT(item != m_Suballocations.end());
7410     VMA_ASSERT(item->type == VMA_SUBALLOCATION_TYPE_FREE);
7411 
7412     VmaSuballocationList::iterator nextItem = item;
7413     ++nextItem;
7414     VMA_ASSERT(nextItem != m_Suballocations.end());
7415     VMA_ASSERT(nextItem->type == VMA_SUBALLOCATION_TYPE_FREE);
7416 
7417     item->size += nextItem->size;
7418     --m_FreeCount;
7419     m_Suballocations.erase(nextItem);
7420 }
7421 
7422 VmaSuballocationList::iterator VmaBlockMetadata_Generic::FreeSuballocation(VmaSuballocationList::iterator suballocItem)
7423 {
7424     // Change this suballocation to be marked as free.
7425     VmaSuballocation& suballoc = *suballocItem;
7426     suballoc.type = VMA_SUBALLOCATION_TYPE_FREE;
7427     suballoc.userData = VMA_NULL;
7428 
7429     // Update totals.
7430     ++m_FreeCount;
7431     m_SumFreeSize += suballoc.size;
7432 
7433     // Merge with previous and/or next suballocation if it's also free.
7434     bool mergeWithNext = false;
7435     bool mergeWithPrev = false;
7436 
7437     VmaSuballocationList::iterator nextItem = suballocItem;
7438     ++nextItem;
7439     if ((nextItem != m_Suballocations.end()) && (nextItem->type == VMA_SUBALLOCATION_TYPE_FREE))
7440     {
7441         mergeWithNext = true;
7442     }
7443 
7444     VmaSuballocationList::iterator prevItem = suballocItem;
7445     if (suballocItem != m_Suballocations.begin())
7446     {
7447         --prevItem;
7448         if (prevItem->type == VMA_SUBALLOCATION_TYPE_FREE)
7449         {
7450             mergeWithPrev = true;
7451         }
7452     }
7453 
7454     if (mergeWithNext)
7455     {
7456         UnregisterFreeSuballocation(nextItem);
7457         MergeFreeWithNext(suballocItem);
7458     }
7459 
7460     if (mergeWithPrev)
7461     {
7462         UnregisterFreeSuballocation(prevItem);
7463         MergeFreeWithNext(prevItem);
7464         RegisterFreeSuballocation(prevItem);
7465         return prevItem;
7466     }
7467     else
7468     {
7469         RegisterFreeSuballocation(suballocItem);
7470         return suballocItem;
7471     }
7472 }
7473 
7474 void VmaBlockMetadata_Generic::RegisterFreeSuballocation(VmaSuballocationList::iterator item)
7475 {
7476     VMA_ASSERT(item->type == VMA_SUBALLOCATION_TYPE_FREE);
7477     VMA_ASSERT(item->size > 0);
7478 
7479     // You may want to enable this validation at the beginning or at the end of
7480     // this function, depending on what you want to check.
7481     VMA_HEAVY_ASSERT(ValidateFreeSuballocationList());
7482 
7483     if (m_FreeSuballocationsBySize.empty())
7484     {
7485         m_FreeSuballocationsBySize.push_back(item);
7486     }
7487     else
7488     {
7489         VmaVectorInsertSorted<VmaSuballocationItemSizeLess>(m_FreeSuballocationsBySize, item);
7490     }
7491 
7492     //VMA_HEAVY_ASSERT(ValidateFreeSuballocationList());
7493 }
7494 
7495 void VmaBlockMetadata_Generic::UnregisterFreeSuballocation(VmaSuballocationList::iterator item)
7496 {
7497     VMA_ASSERT(item->type == VMA_SUBALLOCATION_TYPE_FREE);
7498     VMA_ASSERT(item->size > 0);
7499 
7500     // You may want to enable this validation at the beginning or at the end of
7501     // this function, depending on what you want to check.
7502     VMA_HEAVY_ASSERT(ValidateFreeSuballocationList());
7503 
7504     VmaSuballocationList::iterator* const it = VmaBinaryFindFirstNotLess(
7505         m_FreeSuballocationsBySize.data(),
7506         m_FreeSuballocationsBySize.data() + m_FreeSuballocationsBySize.size(),
7507         item,
7508         VmaSuballocationItemSizeLess());
7509     for (size_t index = it - m_FreeSuballocationsBySize.data();
7510         index < m_FreeSuballocationsBySize.size();
7511         ++index)
7512     {
7513         if (m_FreeSuballocationsBySize[index] == item)
7514         {
7515             VmaVectorRemove(m_FreeSuballocationsBySize, index);
7516             return;
7517         }
7518         VMA_ASSERT((m_FreeSuballocationsBySize[index]->size == item->size) && "Not found.");
7519     }
7520     VMA_ASSERT(0 && "Not found.");
7521 
7522     //VMA_HEAVY_ASSERT(ValidateFreeSuballocationList());
7523 }
7524 #endif // _VMA_BLOCK_METADATA_GENERIC_FUNCTIONS
7525 #endif // _VMA_BLOCK_METADATA_GENERIC
7526 #endif // #if 0
7527 
7528 #ifndef _VMA_BLOCK_METADATA_LINEAR
7529 /*
7530 Allocations and their references in internal data structure look like this:
7531 
7532 if(m_2ndVectorMode == SECOND_VECTOR_EMPTY):
7533 
7534         0 +-------+
7535           |       |
7536           |       |
7537           |       |
7538           +-------+
7539           | Alloc |  1st[m_1stNullItemsBeginCount]
7540           +-------+
7541           | Alloc |  1st[m_1stNullItemsBeginCount + 1]
7542           +-------+
7543           |  ...  |
7544           +-------+
7545           | Alloc |  1st[1st.size() - 1]
7546           +-------+
7547           |       |
7548           |       |
7549           |       |
7550 GetSize() +-------+
7551 
7552 if(m_2ndVectorMode == SECOND_VECTOR_RING_BUFFER):
7553 
7554         0 +-------+
7555           | Alloc |  2nd[0]
7556           +-------+
7557           | Alloc |  2nd[1]
7558           +-------+
7559           |  ...  |
7560           +-------+
7561           | Alloc |  2nd[2nd.size() - 1]
7562           +-------+
7563           |       |
7564           |       |
7565           |       |
7566           +-------+
7567           | Alloc |  1st[m_1stNullItemsBeginCount]
7568           +-------+
7569           | Alloc |  1st[m_1stNullItemsBeginCount + 1]
7570           +-------+
7571           |  ...  |
7572           +-------+
7573           | Alloc |  1st[1st.size() - 1]
7574           +-------+
7575           |       |
7576 GetSize() +-------+
7577 
7578 if(m_2ndVectorMode == SECOND_VECTOR_DOUBLE_STACK):
7579 
7580         0 +-------+
7581           |       |
7582           |       |
7583           |       |
7584           +-------+
7585           | Alloc |  1st[m_1stNullItemsBeginCount]
7586           +-------+
7587           | Alloc |  1st[m_1stNullItemsBeginCount + 1]
7588           +-------+
7589           |  ...  |
7590           +-------+
7591           | Alloc |  1st[1st.size() - 1]
7592           +-------+
7593           |       |
7594           |       |
7595           |       |
7596           +-------+
7597           | Alloc |  2nd[2nd.size() - 1]
7598           +-------+
7599           |  ...  |
7600           +-------+
7601           | Alloc |  2nd[1]
7602           +-------+
7603           | Alloc |  2nd[0]
7604 GetSize() +-------+
7605 
7606 */
7607 class VmaBlockMetadata_Linear : public VmaBlockMetadata
7608 {
7609     VMA_CLASS_NO_COPY(VmaBlockMetadata_Linear)
7610 public:
7611     VmaBlockMetadata_Linear(const VkAllocationCallbacks* pAllocationCallbacks,
7612         VkDeviceSize bufferImageGranularity, bool isVirtual);
7613     virtual ~VmaBlockMetadata_Linear() = default;
7614 
7615     VkDeviceSize GetSumFreeSize() const override { return m_SumFreeSize; }
7616     bool IsEmpty() const override { return GetAllocationCount() == 0; }
7617     VkDeviceSize GetAllocationOffset(VmaAllocHandle allocHandle) const override { return (VkDeviceSize)allocHandle - 1; };
7618 
7619     void Init(VkDeviceSize size) override;
7620     bool Validate() const override;
7621     size_t GetAllocationCount() const override;
7622     size_t GetFreeRegionsCount() const override;
7623 
7624     void AddDetailedStatistics(VmaDetailedStatistics& inoutStats) const override;
7625     void AddStatistics(VmaStatistics& inoutStats) const override;
7626 
7627 #if VMA_STATS_STRING_ENABLED
7628     void PrintDetailedMap(class VmaJsonWriter& json) const override;
7629 #endif
7630 
7631     bool CreateAllocationRequest(
7632         VkDeviceSize allocSize,
7633         VkDeviceSize allocAlignment,
7634         bool upperAddress,
7635         VmaSuballocationType allocType,
7636         uint32_t strategy,
7637         VmaAllocationRequest* pAllocationRequest) override;
7638 
7639     VkResult CheckCorruption(const void* pBlockData) override;
7640 
7641     void Alloc(
7642         const VmaAllocationRequest& request,
7643         VmaSuballocationType type,
7644         void* userData) override;
7645 
7646     void Free(VmaAllocHandle allocHandle) override;
7647     void GetAllocationInfo(VmaAllocHandle allocHandle, VmaVirtualAllocationInfo& outInfo) override;
7648     void* GetAllocationUserData(VmaAllocHandle allocHandle) const override;
7649     VmaAllocHandle GetAllocationListBegin() const override;
7650     VmaAllocHandle GetNextAllocation(VmaAllocHandle prevAlloc) const override;
7651     VkDeviceSize GetNextFreeRegionSize(VmaAllocHandle alloc) const override;
7652     void Clear() override;
7653     void SetAllocationUserData(VmaAllocHandle allocHandle, void* userData) override;
7654     void DebugLogAllAllocations() const override;
7655 
7656 private:
7657     /*
7658     There are two suballocation vectors, used in ping-pong way.
7659     The one with index m_1stVectorIndex is called 1st.
7660     The one with index (m_1stVectorIndex ^ 1) is called 2nd.
7661     2nd can be non-empty only when 1st is not empty.
7662     When 2nd is not empty, m_2ndVectorMode indicates its mode of operation.
7663     */
7664     typedef VmaVector<VmaSuballocation, VmaStlAllocator<VmaSuballocation>> SuballocationVectorType;
7665 
7666     enum SECOND_VECTOR_MODE
7667     {
7668         SECOND_VECTOR_EMPTY,
7669         /*
7670         Suballocations in 2nd vector are created later than the ones in 1st, but they
7671         all have smaller offset.
7672         */
7673         SECOND_VECTOR_RING_BUFFER,
7674         /*
7675         Suballocations in 2nd vector are upper side of double stack.
7676         They all have offsets higher than those in 1st vector.
7677         Top of this stack means smaller offsets, but higher indices in this vector.
7678         */
7679         SECOND_VECTOR_DOUBLE_STACK,
7680     };
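    // Lifecycle sketch, inferred from the diagrams above (illustrative, not
    // normative): the block starts with both vectors empty and 1st growing
    // upward from offset 0. Upper-address allocations go into 2nd in
    // DOUBLE_STACK mode and grow downward from GetSize(). Alternatively, once
    // the space after 1st runs out while the space freed at the beginning is
    // sufficient, new allocations wrap around into 2nd in RING_BUFFER mode,
    // below the surviving items of 1st.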
7681 
7682     VkDeviceSize m_SumFreeSize;
7683     SuballocationVectorType m_Suballocations0, m_Suballocations1;
7684     uint32_t m_1stVectorIndex;
7685     SECOND_VECTOR_MODE m_2ndVectorMode;
7686     // Number of items in 1st vector with hAllocation = null at the beginning.
7687     size_t m_1stNullItemsBeginCount;
7688     // Number of other items in 1st vector with hAllocation = null somewhere in the middle.
7689     size_t m_1stNullItemsMiddleCount;
7690     // Number of items in 2nd vector with hAllocation = null.
7691     size_t m_2ndNullItemsCount;
7692 
7693     SuballocationVectorType& AccessSuballocations1st() { return m_1stVectorIndex ? m_Suballocations1 : m_Suballocations0; }
7694     SuballocationVectorType& AccessSuballocations2nd() { return m_1stVectorIndex ? m_Suballocations0 : m_Suballocations1; }
7695     const SuballocationVectorType& AccessSuballocations1st() const { return m_1stVectorIndex ? m_Suballocations1 : m_Suballocations0; }
7696     const SuballocationVectorType& AccessSuballocations2nd() const { return m_1stVectorIndex ? m_Suballocations0 : m_Suballocations1; }
7697 
7698     VmaSuballocation& FindSuballocation(VkDeviceSize offset) const;
7699     bool ShouldCompact1st() const;
7700     void CleanupAfterFree();
7701 
7702     bool CreateAllocationRequest_LowerAddress(
7703         VkDeviceSize allocSize,
7704         VkDeviceSize allocAlignment,
7705         VmaSuballocationType allocType,
7706         uint32_t strategy,
7707         VmaAllocationRequest* pAllocationRequest);
7708     bool CreateAllocationRequest_UpperAddress(
7709         VkDeviceSize allocSize,
7710         VkDeviceSize allocAlignment,
7711         VmaSuballocationType allocType,
7712         uint32_t strategy,
7713         VmaAllocationRequest* pAllocationRequest);
7714 };
7715 
7716 #ifndef _VMA_BLOCK_METADATA_LINEAR_FUNCTIONS
7717 VmaBlockMetadata_Linear::VmaBlockMetadata_Linear(const VkAllocationCallbacks* pAllocationCallbacks,
7718     VkDeviceSize bufferImageGranularity, bool isVirtual)
7719     : VmaBlockMetadata(pAllocationCallbacks, bufferImageGranularity, isVirtual),
7720     m_SumFreeSize(0),
7721     m_Suballocations0(VmaStlAllocator<VmaSuballocation>(pAllocationCallbacks)),
7722     m_Suballocations1(VmaStlAllocator<VmaSuballocation>(pAllocationCallbacks)),
7723     m_1stVectorIndex(0),
7724     m_2ndVectorMode(SECOND_VECTOR_EMPTY),
7725     m_1stNullItemsBeginCount(0),
7726     m_1stNullItemsMiddleCount(0),
7727     m_2ndNullItemsCount(0) {}
7728 
7729 void VmaBlockMetadata_Linear::Init(VkDeviceSize size)
7730 {
7731     VmaBlockMetadata::Init(size);
7732     m_SumFreeSize = size;
7733 }
7734 
7735 bool VmaBlockMetadata_Linear::Validate() const
7736 {
7737     const SuballocationVectorType& suballocations1st = AccessSuballocations1st();
7738     const SuballocationVectorType& suballocations2nd = AccessSuballocations2nd();
7739 
7740     VMA_VALIDATE(suballocations2nd.empty() == (m_2ndVectorMode == SECOND_VECTOR_EMPTY));
7741     VMA_VALIDATE(!suballocations1st.empty() ||
7742         suballocations2nd.empty() ||
7743         m_2ndVectorMode != SECOND_VECTOR_RING_BUFFER);
7744 
7745     if (!suballocations1st.empty())
7746     {
7747         // A null item at the beginning should be accounted for in m_1stNullItemsBeginCount.
7748         VMA_VALIDATE(suballocations1st[m_1stNullItemsBeginCount].type != VMA_SUBALLOCATION_TYPE_FREE);
7749         // Null item at the end should be just pop_back().
7750         VMA_VALIDATE(suballocations1st.back().type != VMA_SUBALLOCATION_TYPE_FREE);
7751     }
7752     if (!suballocations2nd.empty())
7753     {
7754         // Null item at the end should be just pop_back().
7755         VMA_VALIDATE(suballocations2nd.back().type != VMA_SUBALLOCATION_TYPE_FREE);
7756     }
7757 
7758     VMA_VALIDATE(m_1stNullItemsBeginCount + m_1stNullItemsMiddleCount <= suballocations1st.size());
7759     VMA_VALIDATE(m_2ndNullItemsCount <= suballocations2nd.size());
7760 
7761     VkDeviceSize sumUsedSize = 0;
7762     const size_t suballoc1stCount = suballocations1st.size();
7763     const VkDeviceSize debugMargin = GetDebugMargin();
7764     VkDeviceSize offset = 0;
7765 
7766     if (m_2ndVectorMode == SECOND_VECTOR_RING_BUFFER)
7767     {
7768         const size_t suballoc2ndCount = suballocations2nd.size();
7769         size_t nullItem2ndCount = 0;
7770         for (size_t i = 0; i < suballoc2ndCount; ++i)
7771         {
7772             const VmaSuballocation& suballoc = suballocations2nd[i];
7773             const bool currFree = (suballoc.type == VMA_SUBALLOCATION_TYPE_FREE);
7774 
7775             VmaAllocation const alloc = (VmaAllocation)suballoc.userData;
7776             if (!IsVirtual())
7777             {
7778                 VMA_VALIDATE(currFree == (alloc == VK_NULL_HANDLE));
7779             }
7780             VMA_VALIDATE(suballoc.offset >= offset);
7781 
7782             if (!currFree)
7783             {
7784                 if (!IsVirtual())
7785                 {
7786                     VMA_VALIDATE((VkDeviceSize)alloc->GetAllocHandle() == suballoc.offset + 1);
7787                     VMA_VALIDATE(alloc->GetSize() == suballoc.size);
7788                 }
7789                 sumUsedSize += suballoc.size;
7790             }
7791             else
7792             {
7793                 ++nullItem2ndCount;
7794             }
7795 
7796             offset = suballoc.offset + suballoc.size + debugMargin;
7797         }
7798 
7799         VMA_VALIDATE(nullItem2ndCount == m_2ndNullItemsCount);
7800     }
7801 
7802     for (size_t i = 0; i < m_1stNullItemsBeginCount; ++i)
7803     {
7804         const VmaSuballocation& suballoc = suballocations1st[i];
7805         VMA_VALIDATE(suballoc.type == VMA_SUBALLOCATION_TYPE_FREE &&
7806             suballoc.userData == VMA_NULL);
7807     }
7808 
7809     size_t nullItem1stCount = m_1stNullItemsBeginCount;
7810 
7811     for (size_t i = m_1stNullItemsBeginCount; i < suballoc1stCount; ++i)
7812     {
7813         const VmaSuballocation& suballoc = suballocations1st[i];
7814         const bool currFree = (suballoc.type == VMA_SUBALLOCATION_TYPE_FREE);
7815 
7816         VmaAllocation const alloc = (VmaAllocation)suballoc.userData;
7817         if (!IsVirtual())
7818         {
7819             VMA_VALIDATE(currFree == (alloc == VK_NULL_HANDLE));
7820         }
7821         VMA_VALIDATE(suballoc.offset >= offset);
7822         VMA_VALIDATE(i >= m_1stNullItemsBeginCount || currFree);
7823 
7824         if (!currFree)
7825         {
7826             if (!IsVirtual())
7827             {
7828                 VMA_VALIDATE((VkDeviceSize)alloc->GetAllocHandle() == suballoc.offset + 1);
7829                 VMA_VALIDATE(alloc->GetSize() == suballoc.size);
7830             }
7831             sumUsedSize += suballoc.size;
7832         }
7833         else
7834         {
7835             ++nullItem1stCount;
7836         }
7837 
7838         offset = suballoc.offset + suballoc.size + debugMargin;
7839     }
7840     VMA_VALIDATE(nullItem1stCount == m_1stNullItemsBeginCount + m_1stNullItemsMiddleCount);
7841 
7842     if (m_2ndVectorMode == SECOND_VECTOR_DOUBLE_STACK)
7843     {
7844         const size_t suballoc2ndCount = suballocations2nd.size();
7845         size_t nullItem2ndCount = 0;
7846         for (size_t i = suballoc2ndCount; i--; )
7847         {
7848             const VmaSuballocation& suballoc = suballocations2nd[i];
7849             const bool currFree = (suballoc.type == VMA_SUBALLOCATION_TYPE_FREE);
7850 
7851             VmaAllocation const alloc = (VmaAllocation)suballoc.userData;
7852             if (!IsVirtual())
7853             {
7854                 VMA_VALIDATE(currFree == (alloc == VK_NULL_HANDLE));
7855             }
7856             VMA_VALIDATE(suballoc.offset >= offset);
7857 
7858             if (!currFree)
7859             {
7860                 if (!IsVirtual())
7861                 {
7862                     VMA_VALIDATE((VkDeviceSize)alloc->GetAllocHandle() == suballoc.offset + 1);
7863                     VMA_VALIDATE(alloc->GetSize() == suballoc.size);
7864                 }
7865                 sumUsedSize += suballoc.size;
7866             }
7867             else
7868             {
7869                 ++nullItem2ndCount;
7870             }
7871 
7872             offset = suballoc.offset + suballoc.size + debugMargin;
7873         }
7874 
7875         VMA_VALIDATE(nullItem2ndCount == m_2ndNullItemsCount);
7876     }
7877 
7878     VMA_VALIDATE(offset <= GetSize());
7879     VMA_VALIDATE(m_SumFreeSize == GetSize() - sumUsedSize);
7880 
7881     return true;
7882 }
7883 
7884 size_t VmaBlockMetadata_Linear::GetAllocationCount() const
7885 {
7886     return AccessSuballocations1st().size() - m_1stNullItemsBeginCount - m_1stNullItemsMiddleCount +
7887         AccessSuballocations2nd().size() - m_2ndNullItemsCount;
7888 }
7889 
7890 size_t VmaBlockMetadata_Linear::GetFreeRegionsCount() const
7891 {
7892     // Function only used for defragmentation, which is disabled for this algorithm
7893     VMA_ASSERT(0);
7894     return SIZE_MAX;
7895 }
7896 
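// The statistics walks below visit the block in address order: first the 2nd
// vector when it acts as a ring buffer (its items occupy the low offsets),
// then the 1st vector, then the 2nd vector again when it forms the upper side
// of a double stack, iterated backwards because higher indices there mean
// lower offsets. Gaps between visited allocations are accumulated as unused
// ranges.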
7897 void VmaBlockMetadata_Linear::AddDetailedStatistics(VmaDetailedStatistics& inoutStats) const
7898 {
7899     const VkDeviceSize size = GetSize();
7900     const SuballocationVectorType& suballocations1st = AccessSuballocations1st();
7901     const SuballocationVectorType& suballocations2nd = AccessSuballocations2nd();
7902     const size_t suballoc1stCount = suballocations1st.size();
7903     const size_t suballoc2ndCount = suballocations2nd.size();
7904 
7905     inoutStats.statistics.blockCount++;
7906     inoutStats.statistics.blockBytes += size;
7907 
7908     VkDeviceSize lastOffset = 0;
7909 
7910     if (m_2ndVectorMode == SECOND_VECTOR_RING_BUFFER)
7911     {
7912         const VkDeviceSize freeSpace2ndTo1stEnd = suballocations1st[m_1stNullItemsBeginCount].offset;
7913         size_t nextAlloc2ndIndex = 0;
7914         while (lastOffset < freeSpace2ndTo1stEnd)
7915         {
7916             // Find next non-null allocation or move nextAllocIndex to the end.
7917             while (nextAlloc2ndIndex < suballoc2ndCount &&
7918                 suballocations2nd[nextAlloc2ndIndex].userData == VMA_NULL)
7919             {
7920                 ++nextAlloc2ndIndex;
7921             }
7922 
7923             // Found non-null allocation.
7924             if (nextAlloc2ndIndex < suballoc2ndCount)
7925             {
7926                 const VmaSuballocation& suballoc = suballocations2nd[nextAlloc2ndIndex];
7927 
7928                 // 1. Process free space before this allocation.
7929                 if (lastOffset < suballoc.offset)
7930                 {
                    // There is free space from lastOffset to suballoc.offset.
                    const VkDeviceSize unusedRangeSize = suballoc.offset - lastOffset;
                    VmaAddDetailedStatisticsUnusedRange(inoutStats, unusedRangeSize);
                }

                // 2. Process this allocation.
                // There is allocation with suballoc.offset, suballoc.size.
                VmaAddDetailedStatisticsAllocation(inoutStats, suballoc.size);

                // 3. Prepare for next iteration.
                lastOffset = suballoc.offset + suballoc.size;
                ++nextAlloc2ndIndex;
            }
            // We are at the end.
            else
            {
                // There is free space from lastOffset to freeSpace2ndTo1stEnd.
                if (lastOffset < freeSpace2ndTo1stEnd)
                {
                    const VkDeviceSize unusedRangeSize = freeSpace2ndTo1stEnd - lastOffset;
                    VmaAddDetailedStatisticsUnusedRange(inoutStats, unusedRangeSize);
                }

                // End of loop.
                lastOffset = freeSpace2ndTo1stEnd;
            }
        }
    }

    size_t nextAlloc1stIndex = m_1stNullItemsBeginCount;
    const VkDeviceSize freeSpace1stTo2ndEnd =
        m_2ndVectorMode == SECOND_VECTOR_DOUBLE_STACK ? suballocations2nd.back().offset : size;
    while (lastOffset < freeSpace1stTo2ndEnd)
    {
        // Find next non-null allocation or move nextAlloc1stIndex to the end.
        while (nextAlloc1stIndex < suballoc1stCount &&
            suballocations1st[nextAlloc1stIndex].userData == VMA_NULL)
        {
            ++nextAlloc1stIndex;
        }

        // Found non-null allocation.
        if (nextAlloc1stIndex < suballoc1stCount)
        {
            const VmaSuballocation& suballoc = suballocations1st[nextAlloc1stIndex];

            // 1. Process free space before this allocation.
            if (lastOffset < suballoc.offset)
            {
                // There is free space from lastOffset to suballoc.offset.
                const VkDeviceSize unusedRangeSize = suballoc.offset - lastOffset;
                VmaAddDetailedStatisticsUnusedRange(inoutStats, unusedRangeSize);
            }

            // 2. Process this allocation.
            // There is allocation with suballoc.offset, suballoc.size.
            VmaAddDetailedStatisticsAllocation(inoutStats, suballoc.size);

            // 3. Prepare for next iteration.
            lastOffset = suballoc.offset + suballoc.size;
            ++nextAlloc1stIndex;
        }
        // We are at the end.
        else
        {
            // There is free space from lastOffset to freeSpace1stTo2ndEnd.
            if (lastOffset < freeSpace1stTo2ndEnd)
            {
                const VkDeviceSize unusedRangeSize = freeSpace1stTo2ndEnd - lastOffset;
                VmaAddDetailedStatisticsUnusedRange(inoutStats, unusedRangeSize);
            }

            // End of loop.
            lastOffset = freeSpace1stTo2ndEnd;
        }
    }

    if (m_2ndVectorMode == SECOND_VECTOR_DOUBLE_STACK)
    {
        size_t nextAlloc2ndIndex = suballocations2nd.size() - 1;
        while (lastOffset < size)
        {
            // Find next non-null allocation or move nextAlloc2ndIndex to the end.
            while (nextAlloc2ndIndex != SIZE_MAX &&
                suballocations2nd[nextAlloc2ndIndex].userData == VMA_NULL)
            {
                --nextAlloc2ndIndex;
            }

            // Found non-null allocation.
            if (nextAlloc2ndIndex != SIZE_MAX)
            {
                const VmaSuballocation& suballoc = suballocations2nd[nextAlloc2ndIndex];

                // 1. Process free space before this allocation.
                if (lastOffset < suballoc.offset)
                {
                    // There is free space from lastOffset to suballoc.offset.
                    const VkDeviceSize unusedRangeSize = suballoc.offset - lastOffset;
                    VmaAddDetailedStatisticsUnusedRange(inoutStats, unusedRangeSize);
                }

                // 2. Process this allocation.
                // There is allocation with suballoc.offset, suballoc.size.
                VmaAddDetailedStatisticsAllocation(inoutStats, suballoc.size);

                // 3. Prepare for next iteration.
                lastOffset = suballoc.offset + suballoc.size;
                --nextAlloc2ndIndex;
            }
            // We are at the end.
            else
            {
                // There is free space from lastOffset to size.
                if (lastOffset < size)
                {
                    const VkDeviceSize unusedRangeSize = size - lastOffset;
                    VmaAddDetailedStatisticsUnusedRange(inoutStats, unusedRangeSize);
                }

                // End of loop.
                lastOffset = size;
            }
        }
    }
}

void VmaBlockMetadata_Linear::AddStatistics(VmaStatistics& inoutStats) const
{
    const SuballocationVectorType& suballocations1st = AccessSuballocations1st();
    const SuballocationVectorType& suballocations2nd = AccessSuballocations2nd();
    const VkDeviceSize size = GetSize();
    const size_t suballoc1stCount = suballocations1st.size();
    const size_t suballoc2ndCount = suballocations2nd.size();

    inoutStats.blockCount++;
    inoutStats.blockBytes += size;
    inoutStats.allocationBytes += size - m_SumFreeSize;

    VkDeviceSize lastOffset = 0;

    if (m_2ndVectorMode == SECOND_VECTOR_RING_BUFFER)
    {
        const VkDeviceSize freeSpace2ndTo1stEnd = suballocations1st[m_1stNullItemsBeginCount].offset;
        // Scan the 2nd vector from its beginning. m_1stNullItemsBeginCount indexes
        // the 1st vector and must not be used as a starting index here.
        size_t nextAlloc2ndIndex = 0;
        while (lastOffset < freeSpace2ndTo1stEnd)
        {
            // Find next non-null allocation or move nextAlloc2ndIndex to the end.
            while (nextAlloc2ndIndex < suballoc2ndCount &&
                suballocations2nd[nextAlloc2ndIndex].userData == VMA_NULL)
            {
                ++nextAlloc2ndIndex;
            }

            // Found non-null allocation.
            if (nextAlloc2ndIndex < suballoc2ndCount)
            {
                const VmaSuballocation& suballoc = suballocations2nd[nextAlloc2ndIndex];

                // 1. Free space before this allocation is only skipped over:
                // VmaStatistics does not record unused ranges.

                // 2. Process this allocation.
                // There is allocation with suballoc.offset, suballoc.size.
                ++inoutStats.allocationCount;

                // 3. Prepare for next iteration.
                lastOffset = suballoc.offset + suballoc.size;
                ++nextAlloc2ndIndex;
            }
            // We are at the end.
            else
            {
                // End of loop. (Trailing free space is not recorded either.)
                lastOffset = freeSpace2ndTo1stEnd;
            }
        }
    }

    size_t nextAlloc1stIndex = m_1stNullItemsBeginCount;
    const VkDeviceSize freeSpace1stTo2ndEnd =
        m_2ndVectorMode == SECOND_VECTOR_DOUBLE_STACK ? suballocations2nd.back().offset : size;
    while (lastOffset < freeSpace1stTo2ndEnd)
    {
        // Find next non-null allocation or move nextAlloc1stIndex to the end.
        while (nextAlloc1stIndex < suballoc1stCount &&
            suballocations1st[nextAlloc1stIndex].userData == VMA_NULL)
        {
            ++nextAlloc1stIndex;
        }

        // Found non-null allocation.
        if (nextAlloc1stIndex < suballoc1stCount)
        {
            const VmaSuballocation& suballoc = suballocations1st[nextAlloc1stIndex];

            // 1. Free space before this allocation is only skipped over.

            // 2. Process this allocation.
            // There is allocation with suballoc.offset, suballoc.size.
            ++inoutStats.allocationCount;

            // 3. Prepare for next iteration.
            lastOffset = suballoc.offset + suballoc.size;
            ++nextAlloc1stIndex;
        }
        // We are at the end.
        else
        {
            // End of loop.
            lastOffset = freeSpace1stTo2ndEnd;
        }
    }

    if (m_2ndVectorMode == SECOND_VECTOR_DOUBLE_STACK)
    {
        size_t nextAlloc2ndIndex = suballocations2nd.size() - 1;
        while (lastOffset < size)
        {
            // Find next non-null allocation or move nextAlloc2ndIndex to the end.
            while (nextAlloc2ndIndex != SIZE_MAX &&
                suballocations2nd[nextAlloc2ndIndex].userData == VMA_NULL)
            {
                --nextAlloc2ndIndex;
            }

            // Found non-null allocation.
            if (nextAlloc2ndIndex != SIZE_MAX)
            {
                const VmaSuballocation& suballoc = suballocations2nd[nextAlloc2ndIndex];

                // 1. Free space before this allocation is only skipped over.

                // 2. Process this allocation.
                // There is allocation with suballoc.offset, suballoc.size.
                ++inoutStats.allocationCount;

                // 3. Prepare for next iteration.
                lastOffset = suballoc.offset + suballoc.size;
                --nextAlloc2ndIndex;
            }
            // We are at the end.
            else
            {
                // End of loop.
                lastOffset = size;
            }
        }
    }
}
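
// Scan order shared by AddDetailedStatistics and AddStatistics above, and by
// PrintDetailedMap below: in ring-buffer mode the 2nd vector covers
// [0, first used offset of the 1st vector); the 1st vector is then walked up
// to the block end, or up to suballocations2nd.back().offset in double-stack
// mode; finally, in double-stack mode the 2nd vector is walked backward
// (its back() element holds the lowest offset) up to the block end.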

#if VMA_STATS_STRING_ENABLED
void VmaBlockMetadata_Linear::PrintDetailedMap(class VmaJsonWriter& json) const
{
    const VkDeviceSize size = GetSize();
    const SuballocationVectorType& suballocations1st = AccessSuballocations1st();
    const SuballocationVectorType& suballocations2nd = AccessSuballocations2nd();
    const size_t suballoc1stCount = suballocations1st.size();
    const size_t suballoc2ndCount = suballocations2nd.size();

    // FIRST PASS

    size_t unusedRangeCount = 0;
    VkDeviceSize usedBytes = 0;

    VkDeviceSize lastOffset = 0;

    size_t alloc2ndCount = 0;
    if (m_2ndVectorMode == SECOND_VECTOR_RING_BUFFER)
    {
        const VkDeviceSize freeSpace2ndTo1stEnd = suballocations1st[m_1stNullItemsBeginCount].offset;
        size_t nextAlloc2ndIndex = 0;
        while (lastOffset < freeSpace2ndTo1stEnd)
        {
            // Find next non-null allocation or move nextAlloc2ndIndex to the end.
            while (nextAlloc2ndIndex < suballoc2ndCount &&
                suballocations2nd[nextAlloc2ndIndex].userData == VMA_NULL)
            {
                ++nextAlloc2ndIndex;
            }

            // Found non-null allocation.
            if (nextAlloc2ndIndex < suballoc2ndCount)
            {
                const VmaSuballocation& suballoc = suballocations2nd[nextAlloc2ndIndex];

                // 1. Process free space before this allocation.
                if (lastOffset < suballoc.offset)
                {
                    // There is free space from lastOffset to suballoc.offset.
                    ++unusedRangeCount;
                }

                // 2. Process this allocation.
                // There is allocation with suballoc.offset, suballoc.size.
                ++alloc2ndCount;
                usedBytes += suballoc.size;

                // 3. Prepare for next iteration.
                lastOffset = suballoc.offset + suballoc.size;
                ++nextAlloc2ndIndex;
            }
            // We are at the end.
            else
            {
                if (lastOffset < freeSpace2ndTo1stEnd)
                {
                    // There is free space from lastOffset to freeSpace2ndTo1stEnd.
                    ++unusedRangeCount;
                }

                // End of loop.
                lastOffset = freeSpace2ndTo1stEnd;
            }
        }
    }

    size_t nextAlloc1stIndex = m_1stNullItemsBeginCount;
    size_t alloc1stCount = 0;
    const VkDeviceSize freeSpace1stTo2ndEnd =
        m_2ndVectorMode == SECOND_VECTOR_DOUBLE_STACK ? suballocations2nd.back().offset : size;
    while (lastOffset < freeSpace1stTo2ndEnd)
    {
        // Find next non-null allocation or move nextAlloc1stIndex to the end.
        while (nextAlloc1stIndex < suballoc1stCount &&
            suballocations1st[nextAlloc1stIndex].userData == VMA_NULL)
        {
            ++nextAlloc1stIndex;
        }

        // Found non-null allocation.
        if (nextAlloc1stIndex < suballoc1stCount)
        {
            const VmaSuballocation& suballoc = suballocations1st[nextAlloc1stIndex];

            // 1. Process free space before this allocation.
            if (lastOffset < suballoc.offset)
            {
                // There is free space from lastOffset to suballoc.offset.
                ++unusedRangeCount;
            }

            // 2. Process this allocation.
            // There is allocation with suballoc.offset, suballoc.size.
            ++alloc1stCount;
            usedBytes += suballoc.size;

            // 3. Prepare for next iteration.
            lastOffset = suballoc.offset + suballoc.size;
            ++nextAlloc1stIndex;
        }
        // We are at the end.
        else
        {
            if (lastOffset < size)
            {
                // There is free space from lastOffset to freeSpace1stTo2ndEnd.
                ++unusedRangeCount;
            }

            // End of loop.
            lastOffset = freeSpace1stTo2ndEnd;
        }
    }

    if (m_2ndVectorMode == SECOND_VECTOR_DOUBLE_STACK)
    {
        size_t nextAlloc2ndIndex = suballocations2nd.size() - 1;
        while (lastOffset < size)
        {
            // Find next non-null allocation or move nextAlloc2ndIndex to the end.
            while (nextAlloc2ndIndex != SIZE_MAX &&
                suballocations2nd[nextAlloc2ndIndex].userData == VMA_NULL)
            {
                --nextAlloc2ndIndex;
            }

            // Found non-null allocation.
            if (nextAlloc2ndIndex != SIZE_MAX)
            {
                const VmaSuballocation& suballoc = suballocations2nd[nextAlloc2ndIndex];

                // 1. Process free space before this allocation.
                if (lastOffset < suballoc.offset)
                {
                    // There is free space from lastOffset to suballoc.offset.
                    ++unusedRangeCount;
                }

                // 2. Process this allocation.
                // There is allocation with suballoc.offset, suballoc.size.
                ++alloc2ndCount;
                usedBytes += suballoc.size;

                // 3. Prepare for next iteration.
                lastOffset = suballoc.offset + suballoc.size;
                --nextAlloc2ndIndex;
            }
            // We are at the end.
            else
            {
                if (lastOffset < size)
                {
                    // There is free space from lastOffset to size.
                    ++unusedRangeCount;
                }

                // End of loop.
                lastOffset = size;
            }
        }
    }

    const VkDeviceSize unusedBytes = size - usedBytes;
    PrintDetailedMap_Begin(json, unusedBytes, alloc1stCount + alloc2ndCount, unusedRangeCount);

    // SECOND PASS
    lastOffset = 0;

    if (m_2ndVectorMode == SECOND_VECTOR_RING_BUFFER)
    {
        const VkDeviceSize freeSpace2ndTo1stEnd = suballocations1st[m_1stNullItemsBeginCount].offset;
        size_t nextAlloc2ndIndex = 0;
        while (lastOffset < freeSpace2ndTo1stEnd)
        {
            // Find next non-null allocation or move nextAlloc2ndIndex to the end.
            while (nextAlloc2ndIndex < suballoc2ndCount &&
                suballocations2nd[nextAlloc2ndIndex].userData == VMA_NULL)
            {
                ++nextAlloc2ndIndex;
            }

            // Found non-null allocation.
            if (nextAlloc2ndIndex < suballoc2ndCount)
            {
                const VmaSuballocation& suballoc = suballocations2nd[nextAlloc2ndIndex];

                // 1. Process free space before this allocation.
                if (lastOffset < suballoc.offset)
                {
                    // There is free space from lastOffset to suballoc.offset.
                    const VkDeviceSize unusedRangeSize = suballoc.offset - lastOffset;
                    PrintDetailedMap_UnusedRange(json, lastOffset, unusedRangeSize);
                }

                // 2. Process this allocation.
                // There is allocation with suballoc.offset, suballoc.size.
                PrintDetailedMap_Allocation(json, suballoc.offset, suballoc.size, suballoc.userData);

                // 3. Prepare for next iteration.
                lastOffset = suballoc.offset + suballoc.size;
                ++nextAlloc2ndIndex;
            }
            // We are at the end.
            else
            {
                if (lastOffset < freeSpace2ndTo1stEnd)
                {
                    // There is free space from lastOffset to freeSpace2ndTo1stEnd.
                    const VkDeviceSize unusedRangeSize = freeSpace2ndTo1stEnd - lastOffset;
                    PrintDetailedMap_UnusedRange(json, lastOffset, unusedRangeSize);
                }

                // End of loop.
                lastOffset = freeSpace2ndTo1stEnd;
            }
        }
    }

    nextAlloc1stIndex = m_1stNullItemsBeginCount;
    while (lastOffset < freeSpace1stTo2ndEnd)
    {
        // Find next non-null allocation or move nextAlloc1stIndex to the end.
        while (nextAlloc1stIndex < suballoc1stCount &&
            suballocations1st[nextAlloc1stIndex].userData == VMA_NULL)
        {
            ++nextAlloc1stIndex;
        }

        // Found non-null allocation.
        if (nextAlloc1stIndex < suballoc1stCount)
        {
            const VmaSuballocation& suballoc = suballocations1st[nextAlloc1stIndex];

            // 1. Process free space before this allocation.
            if (lastOffset < suballoc.offset)
            {
                // There is free space from lastOffset to suballoc.offset.
                const VkDeviceSize unusedRangeSize = suballoc.offset - lastOffset;
                PrintDetailedMap_UnusedRange(json, lastOffset, unusedRangeSize);
            }

            // 2. Process this allocation.
            // There is allocation with suballoc.offset, suballoc.size.
            PrintDetailedMap_Allocation(json, suballoc.offset, suballoc.size, suballoc.userData);

            // 3. Prepare for next iteration.
            lastOffset = suballoc.offset + suballoc.size;
            ++nextAlloc1stIndex;
        }
        // We are at the end.
        else
        {
            if (lastOffset < freeSpace1stTo2ndEnd)
            {
                // There is free space from lastOffset to freeSpace1stTo2ndEnd.
                const VkDeviceSize unusedRangeSize = freeSpace1stTo2ndEnd - lastOffset;
                PrintDetailedMap_UnusedRange(json, lastOffset, unusedRangeSize);
            }

            // End of loop.
            lastOffset = freeSpace1stTo2ndEnd;
        }
    }

    if (m_2ndVectorMode == SECOND_VECTOR_DOUBLE_STACK)
    {
        size_t nextAlloc2ndIndex = suballocations2nd.size() - 1;
        while (lastOffset < size)
        {
            // Find next non-null allocation or move nextAlloc2ndIndex to the end.
            while (nextAlloc2ndIndex != SIZE_MAX &&
                suballocations2nd[nextAlloc2ndIndex].userData == VMA_NULL)
            {
                --nextAlloc2ndIndex;
            }

            // Found non-null allocation.
            if (nextAlloc2ndIndex != SIZE_MAX)
            {
                const VmaSuballocation& suballoc = suballocations2nd[nextAlloc2ndIndex];

                // 1. Process free space before this allocation.
                if (lastOffset < suballoc.offset)
                {
                    // There is free space from lastOffset to suballoc.offset.
                    const VkDeviceSize unusedRangeSize = suballoc.offset - lastOffset;
                    PrintDetailedMap_UnusedRange(json, lastOffset, unusedRangeSize);
                }

                // 2. Process this allocation.
                // There is allocation with suballoc.offset, suballoc.size.
                PrintDetailedMap_Allocation(json, suballoc.offset, suballoc.size, suballoc.userData);

                // 3. Prepare for next iteration.
                lastOffset = suballoc.offset + suballoc.size;
                --nextAlloc2ndIndex;
            }
            // We are at the end.
            else
            {
                if (lastOffset < size)
                {
                    // There is free space from lastOffset to size.
                    const VkDeviceSize unusedRangeSize = size - lastOffset;
                    PrintDetailedMap_UnusedRange(json, lastOffset, unusedRangeSize);
                }

                // End of loop.
                lastOffset = size;
            }
        }
    }

    PrintDetailedMap_End(json);
}
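
// PrintDetailedMap intentionally runs two passes over the same three regions:
// the first pass only counts allocations, unused ranges, and used bytes,
// because PrintDetailedMap_Begin() needs those totals before any entry is
// emitted; the second pass then writes the actual entries through
// PrintDetailedMap_Allocation() and PrintDetailedMap_UnusedRange().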
#endif // VMA_STATS_STRING_ENABLED

bool VmaBlockMetadata_Linear::CreateAllocationRequest(
    VkDeviceSize allocSize,
    VkDeviceSize allocAlignment,
    bool upperAddress,
    VmaSuballocationType allocType,
    uint32_t strategy,
    VmaAllocationRequest* pAllocationRequest)
{
    VMA_ASSERT(allocSize > 0);
    VMA_ASSERT(allocType != VMA_SUBALLOCATION_TYPE_FREE);
    VMA_ASSERT(pAllocationRequest != VMA_NULL);
    VMA_HEAVY_ASSERT(Validate());
    pAllocationRequest->size = allocSize;
    return upperAddress ?
        CreateAllocationRequest_UpperAddress(
            allocSize, allocAlignment, allocType, strategy, pAllocationRequest) :
        CreateAllocationRequest_LowerAddress(
            allocSize, allocAlignment, allocType, strategy, pAllocationRequest);
}

VkResult VmaBlockMetadata_Linear::CheckCorruption(const void* pBlockData)
{
    VMA_ASSERT(!IsVirtual());
    SuballocationVectorType& suballocations1st = AccessSuballocations1st();
    for (size_t i = m_1stNullItemsBeginCount, count = suballocations1st.size(); i < count; ++i)
    {
        const VmaSuballocation& suballoc = suballocations1st[i];
        if (suballoc.type != VMA_SUBALLOCATION_TYPE_FREE)
        {
            if (!VmaValidateMagicValue(pBlockData, suballoc.offset + suballoc.size))
            {
                VMA_ASSERT(0 && "MEMORY CORRUPTION DETECTED AFTER VALIDATED ALLOCATION!");
                return VK_ERROR_UNKNOWN;
            }
        }
    }

    SuballocationVectorType& suballocations2nd = AccessSuballocations2nd();
    for (size_t i = 0, count = suballocations2nd.size(); i < count; ++i)
    {
        const VmaSuballocation& suballoc = suballocations2nd[i];
        if (suballoc.type != VMA_SUBALLOCATION_TYPE_FREE)
        {
            if (!VmaValidateMagicValue(pBlockData, suballoc.offset + suballoc.size))
            {
                VMA_ASSERT(0 && "MEMORY CORRUPTION DETECTED AFTER VALIDATED ALLOCATION!");
                return VK_ERROR_UNKNOWN;
            }
        }
    }

    return VK_SUCCESS;
}
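
// CheckCorruption() above assumes the debug-margin mechanism: a magic value is
// written just past the end of each allocation and re-read here from the
// mapped block data via VmaValidateMagicValue() (hence VMA_ASSERT(!IsVirtual())
// - a virtual block has no memory to inspect). A minimal caller-side sketch,
// assuming the public vmaCheckCorruption() entry point and corruption
// detection enabled at build time:
//
//     VkResult res = vmaCheckCorruption(allocator, UINT32_MAX); // all memory types
//     // VK_SUCCESS: all margins validated; an error code signals corruption
//     // or that the feature is not enabled.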

void VmaBlockMetadata_Linear::Alloc(
    const VmaAllocationRequest& request,
    VmaSuballocationType type,
    void* userData)
{
    const VkDeviceSize offset = (VkDeviceSize)request.allocHandle - 1;
    const VmaSuballocation newSuballoc = { offset, request.size, userData, type };

    switch (request.type)
    {
    case VmaAllocationRequestType::UpperAddress:
    {
        VMA_ASSERT(m_2ndVectorMode != SECOND_VECTOR_RING_BUFFER &&
            "CRITICAL ERROR: Trying to use linear allocator as double stack while it was already used as ring buffer.");
        SuballocationVectorType& suballocations2nd = AccessSuballocations2nd();
        suballocations2nd.push_back(newSuballoc);
        m_2ndVectorMode = SECOND_VECTOR_DOUBLE_STACK;
    }
    break;
    case VmaAllocationRequestType::EndOf1st:
    {
        SuballocationVectorType& suballocations1st = AccessSuballocations1st();

        VMA_ASSERT(suballocations1st.empty() ||
            offset >= suballocations1st.back().offset + suballocations1st.back().size);
        // Check if it fits before the end of the block.
        VMA_ASSERT(offset + request.size <= GetSize());

        suballocations1st.push_back(newSuballoc);
    }
    break;
    case VmaAllocationRequestType::EndOf2nd:
    {
        SuballocationVectorType& suballocations1st = AccessSuballocations1st();
        // New allocation at the end of 2-part ring buffer, so before first allocation from 1st vector.
        VMA_ASSERT(!suballocations1st.empty() &&
            offset + request.size <= suballocations1st[m_1stNullItemsBeginCount].offset);
        SuballocationVectorType& suballocations2nd = AccessSuballocations2nd();

        switch (m_2ndVectorMode)
        {
        case SECOND_VECTOR_EMPTY:
            // First allocation from second part ring buffer.
            VMA_ASSERT(suballocations2nd.empty());
            m_2ndVectorMode = SECOND_VECTOR_RING_BUFFER;
            break;
        case SECOND_VECTOR_RING_BUFFER:
            // 2-part ring buffer is already started.
            VMA_ASSERT(!suballocations2nd.empty());
            break;
        case SECOND_VECTOR_DOUBLE_STACK:
            VMA_ASSERT(0 && "CRITICAL ERROR: Trying to use linear allocator as ring buffer while it was already used as double stack.");
            break;
        default:
            VMA_ASSERT(0);
        }

        suballocations2nd.push_back(newSuballoc);
    }
    break;
    default:
        VMA_ASSERT(0 && "CRITICAL INTERNAL ERROR.");
    }

    m_SumFreeSize -= newSuballoc.size;
}
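
// Handle encoding used throughout this class: a VmaAllocHandle is the
// suballocation offset plus 1, so offset 0 remains representable while the
// handle value 0 stays reserved as VK_NULL_HANDLE. This is why Alloc() above
// decodes "(VkDeviceSize)request.allocHandle - 1" and the request-creation
// functions below encode "(VmaAllocHandle)(resultOffset + 1)".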

void VmaBlockMetadata_Linear::Free(VmaAllocHandle allocHandle)
{
    SuballocationVectorType& suballocations1st = AccessSuballocations1st();
    SuballocationVectorType& suballocations2nd = AccessSuballocations2nd();
    VkDeviceSize offset = (VkDeviceSize)allocHandle - 1;

    if (!suballocations1st.empty())
    {
        // First allocation: Mark it as next empty at the beginning.
        VmaSuballocation& firstSuballoc = suballocations1st[m_1stNullItemsBeginCount];
        if (firstSuballoc.offset == offset)
        {
            firstSuballoc.type = VMA_SUBALLOCATION_TYPE_FREE;
            firstSuballoc.userData = VMA_NULL;
            m_SumFreeSize += firstSuballoc.size;
            ++m_1stNullItemsBeginCount;
            CleanupAfterFree();
            return;
        }
    }

    // Last allocation in 2-part ring buffer or top of upper stack (same logic).
    if (m_2ndVectorMode == SECOND_VECTOR_RING_BUFFER ||
        m_2ndVectorMode == SECOND_VECTOR_DOUBLE_STACK)
    {
        VmaSuballocation& lastSuballoc = suballocations2nd.back();
        if (lastSuballoc.offset == offset)
        {
            m_SumFreeSize += lastSuballoc.size;
            suballocations2nd.pop_back();
            CleanupAfterFree();
            return;
        }
    }
    // Last allocation in 1st vector.
    else if (m_2ndVectorMode == SECOND_VECTOR_EMPTY)
    {
        VmaSuballocation& lastSuballoc = suballocations1st.back();
        if (lastSuballoc.offset == offset)
        {
            m_SumFreeSize += lastSuballoc.size;
            suballocations1st.pop_back();
            CleanupAfterFree();
            return;
        }
    }

    VmaSuballocation refSuballoc;
    refSuballoc.offset = offset;
    // Rest of members stays uninitialized intentionally for better performance.

    // Item from the middle of 1st vector.
    {
        const SuballocationVectorType::iterator it = VmaBinaryFindSorted(
            suballocations1st.begin() + m_1stNullItemsBeginCount,
            suballocations1st.end(),
            refSuballoc,
            VmaSuballocationOffsetLess());
        if (it != suballocations1st.end())
        {
            it->type = VMA_SUBALLOCATION_TYPE_FREE;
            it->userData = VMA_NULL;
            ++m_1stNullItemsMiddleCount;
            m_SumFreeSize += it->size;
            CleanupAfterFree();
            return;
        }
    }

    if (m_2ndVectorMode != SECOND_VECTOR_EMPTY)
    {
        // Item from the middle of 2nd vector.
        const SuballocationVectorType::iterator it = m_2ndVectorMode == SECOND_VECTOR_RING_BUFFER ?
            VmaBinaryFindSorted(suballocations2nd.begin(), suballocations2nd.end(), refSuballoc, VmaSuballocationOffsetLess()) :
            VmaBinaryFindSorted(suballocations2nd.begin(), suballocations2nd.end(), refSuballoc, VmaSuballocationOffsetGreater());
        if (it != suballocations2nd.end())
        {
            it->type = VMA_SUBALLOCATION_TYPE_FREE;
            it->userData = VMA_NULL;
            ++m_2ndNullItemsCount;
            m_SumFreeSize += it->size;
            CleanupAfterFree();
            return;
        }
    }

    VMA_ASSERT(0 && "Allocation to free not found in linear allocator!");
}
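
// Free() above is ordered from cheapest to most expensive case: O(1) release
// at the logical beginning of the 1st vector, O(1) pop at the back of the
// vector owning the last allocation, then O(log N) binary searches through the
// middle of the 1st and (if used) 2nd vector. Middle items are only marked
// VMA_SUBALLOCATION_TYPE_FREE and counted; CleanupAfterFree() below decides
// when the vectors actually shrink or compact.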

void VmaBlockMetadata_Linear::GetAllocationInfo(VmaAllocHandle allocHandle, VmaVirtualAllocationInfo& outInfo)
{
    outInfo.offset = (VkDeviceSize)allocHandle - 1;
    VmaSuballocation& suballoc = FindSuballocation(outInfo.offset);
    outInfo.size = suballoc.size;
    outInfo.pUserData = suballoc.userData;
}

void* VmaBlockMetadata_Linear::GetAllocationUserData(VmaAllocHandle allocHandle) const
{
    return FindSuballocation((VkDeviceSize)allocHandle - 1).userData;
}

VmaAllocHandle VmaBlockMetadata_Linear::GetAllocationListBegin() const
{
    // Function only used for defragmentation, which is disabled for this algorithm.
    VMA_ASSERT(0);
    return VK_NULL_HANDLE;
}

VmaAllocHandle VmaBlockMetadata_Linear::GetNextAllocation(VmaAllocHandle prevAlloc) const
{
    // Function only used for defragmentation, which is disabled for this algorithm.
    VMA_ASSERT(0);
    return VK_NULL_HANDLE;
}

VkDeviceSize VmaBlockMetadata_Linear::GetNextFreeRegionSize(VmaAllocHandle alloc) const
{
    // Function only used for defragmentation, which is disabled for this algorithm.
    VMA_ASSERT(0);
    return 0;
}

void VmaBlockMetadata_Linear::Clear()
{
    m_SumFreeSize = GetSize();
    m_Suballocations0.clear();
    m_Suballocations1.clear();
    // Leaving m_1stVectorIndex unchanged - it doesn't matter.
    m_2ndVectorMode = SECOND_VECTOR_EMPTY;
    m_1stNullItemsBeginCount = 0;
    m_1stNullItemsMiddleCount = 0;
    m_2ndNullItemsCount = 0;
}

void VmaBlockMetadata_Linear::SetAllocationUserData(VmaAllocHandle allocHandle, void* userData)
{
    VmaSuballocation& suballoc = FindSuballocation((VkDeviceSize)allocHandle - 1);
    suballoc.userData = userData;
}

void VmaBlockMetadata_Linear::DebugLogAllAllocations() const
{
    const SuballocationVectorType& suballocations1st = AccessSuballocations1st();
    for (auto it = suballocations1st.begin() + m_1stNullItemsBeginCount; it != suballocations1st.end(); ++it)
        if (it->type != VMA_SUBALLOCATION_TYPE_FREE)
            DebugLogAllocation(it->offset, it->size, it->userData);

    const SuballocationVectorType& suballocations2nd = AccessSuballocations2nd();
    for (auto it = suballocations2nd.begin(); it != suballocations2nd.end(); ++it)
        if (it->type != VMA_SUBALLOCATION_TYPE_FREE)
            DebugLogAllocation(it->offset, it->size, it->userData);
}

VmaSuballocation& VmaBlockMetadata_Linear::FindSuballocation(VkDeviceSize offset) const
{
    const SuballocationVectorType& suballocations1st = AccessSuballocations1st();
    const SuballocationVectorType& suballocations2nd = AccessSuballocations2nd();

    VmaSuballocation refSuballoc;
    refSuballoc.offset = offset;
    // Rest of members stays uninitialized intentionally for better performance.

    // Item from the 1st vector.
    {
        SuballocationVectorType::const_iterator it = VmaBinaryFindSorted(
            suballocations1st.begin() + m_1stNullItemsBeginCount,
            suballocations1st.end(),
            refSuballoc,
            VmaSuballocationOffsetLess());
        if (it != suballocations1st.end())
        {
            return const_cast<VmaSuballocation&>(*it);
        }
    }

    if (m_2ndVectorMode != SECOND_VECTOR_EMPTY)
    {
        // Item from the 2nd vector: sorted by increasing offset in ring-buffer
        // mode, by decreasing offset in double-stack mode, hence the two comparators.
        SuballocationVectorType::const_iterator it = m_2ndVectorMode == SECOND_VECTOR_RING_BUFFER ?
            VmaBinaryFindSorted(suballocations2nd.begin(), suballocations2nd.end(), refSuballoc, VmaSuballocationOffsetLess()) :
            VmaBinaryFindSorted(suballocations2nd.begin(), suballocations2nd.end(), refSuballoc, VmaSuballocationOffsetGreater());
        if (it != suballocations2nd.end())
        {
            return const_cast<VmaSuballocation&>(*it);
        }
    }

    VMA_ASSERT(0 && "Allocation not found in linear allocator!");
    return const_cast<VmaSuballocation&>(suballocations1st.back()); // Should never occur.
}

bool VmaBlockMetadata_Linear::ShouldCompact1st() const
{
    const size_t nullItemCount = m_1stNullItemsBeginCount + m_1stNullItemsMiddleCount;
    const size_t suballocCount = AccessSuballocations1st().size();
    return suballocCount > 32 && nullItemCount * 2 >= (suballocCount - nullItemCount) * 3;
}
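
// The heuristic above: with N null items and L = suballocCount - N live items,
// compaction triggers when 2*N >= 3*L, i.e. when free slots outnumber live
// items 1.5 to 1 - and only once the vector exceeds 32 entries, so small
// vectors never pay the copying cost. Example: 40 items with 25 null gives
// L = 15; 2*25 = 50 >= 45 = 3*15, so ShouldCompact1st() returns true.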

void VmaBlockMetadata_Linear::CleanupAfterFree()
{
    SuballocationVectorType& suballocations1st = AccessSuballocations1st();
    SuballocationVectorType& suballocations2nd = AccessSuballocations2nd();

    if (IsEmpty())
    {
        suballocations1st.clear();
        suballocations2nd.clear();
        m_1stNullItemsBeginCount = 0;
        m_1stNullItemsMiddleCount = 0;
        m_2ndNullItemsCount = 0;
        m_2ndVectorMode = SECOND_VECTOR_EMPTY;
    }
    else
    {
        const size_t suballoc1stCount = suballocations1st.size();
        const size_t nullItem1stCount = m_1stNullItemsBeginCount + m_1stNullItemsMiddleCount;
        VMA_ASSERT(nullItem1stCount <= suballoc1stCount);

        // Find more null items at the beginning of 1st vector.
        while (m_1stNullItemsBeginCount < suballoc1stCount &&
            suballocations1st[m_1stNullItemsBeginCount].type == VMA_SUBALLOCATION_TYPE_FREE)
        {
            ++m_1stNullItemsBeginCount;
            --m_1stNullItemsMiddleCount;
        }

        // Find more null items at the end of 1st vector.
        while (m_1stNullItemsMiddleCount > 0 &&
            suballocations1st.back().type == VMA_SUBALLOCATION_TYPE_FREE)
        {
            --m_1stNullItemsMiddleCount;
            suballocations1st.pop_back();
        }

        // Find more null items at the end of 2nd vector.
        while (m_2ndNullItemsCount > 0 &&
            suballocations2nd.back().type == VMA_SUBALLOCATION_TYPE_FREE)
        {
            --m_2ndNullItemsCount;
            suballocations2nd.pop_back();
        }

        // Find more null items at the beginning of 2nd vector.
        while (m_2ndNullItemsCount > 0 &&
            suballocations2nd[0].type == VMA_SUBALLOCATION_TYPE_FREE)
        {
            --m_2ndNullItemsCount;
            VmaVectorRemove(suballocations2nd, 0);
        }

        if (ShouldCompact1st())
        {
            const size_t nonNullItemCount = suballoc1stCount - nullItem1stCount;
            size_t srcIndex = m_1stNullItemsBeginCount;
            for (size_t dstIndex = 0; dstIndex < nonNullItemCount; ++dstIndex)
            {
                while (suballocations1st[srcIndex].type == VMA_SUBALLOCATION_TYPE_FREE)
                {
                    ++srcIndex;
                }
                if (dstIndex != srcIndex)
                {
                    suballocations1st[dstIndex] = suballocations1st[srcIndex];
                }
                ++srcIndex;
            }
            suballocations1st.resize(nonNullItemCount);
            m_1stNullItemsBeginCount = 0;
            m_1stNullItemsMiddleCount = 0;
        }

        // 2nd vector became empty.
        if (suballocations2nd.empty())
        {
            m_2ndVectorMode = SECOND_VECTOR_EMPTY;
        }

        // 1st vector became empty.
        if (suballocations1st.size() - m_1stNullItemsBeginCount == 0)
        {
            suballocations1st.clear();
            m_1stNullItemsBeginCount = 0;

            if (!suballocations2nd.empty() && m_2ndVectorMode == SECOND_VECTOR_RING_BUFFER)
            {
                // Swap 1st with 2nd. Now 2nd is empty.
                m_2ndVectorMode = SECOND_VECTOR_EMPTY;
                m_1stNullItemsMiddleCount = m_2ndNullItemsCount;
                while (m_1stNullItemsBeginCount < suballocations2nd.size() &&
                    suballocations2nd[m_1stNullItemsBeginCount].type == VMA_SUBALLOCATION_TYPE_FREE)
                {
                    ++m_1stNullItemsBeginCount;
                    --m_1stNullItemsMiddleCount;
                }
                m_2ndNullItemsCount = 0;
                m_1stVectorIndex ^= 1;
            }
        }
    }

    VMA_HEAVY_ASSERT(Validate());
}

bool VmaBlockMetadata_Linear::CreateAllocationRequest_LowerAddress(
    VkDeviceSize allocSize,
    VkDeviceSize allocAlignment,
    VmaSuballocationType allocType,
    uint32_t strategy,
    VmaAllocationRequest* pAllocationRequest)
{
    const VkDeviceSize blockSize = GetSize();
    const VkDeviceSize debugMargin = GetDebugMargin();
    const VkDeviceSize bufferImageGranularity = GetBufferImageGranularity();
    SuballocationVectorType& suballocations1st = AccessSuballocations1st();
    SuballocationVectorType& suballocations2nd = AccessSuballocations2nd();

    if (m_2ndVectorMode == SECOND_VECTOR_EMPTY || m_2ndVectorMode == SECOND_VECTOR_DOUBLE_STACK)
    {
        // Try to allocate at the end of 1st vector.

        VkDeviceSize resultBaseOffset = 0;
        if (!suballocations1st.empty())
        {
            const VmaSuballocation& lastSuballoc = suballocations1st.back();
            resultBaseOffset = lastSuballoc.offset + lastSuballoc.size + debugMargin;
        }

        // Start from offset equal to beginning of free space.
        VkDeviceSize resultOffset = resultBaseOffset;

        // Apply alignment.
        resultOffset = VmaAlignUp(resultOffset, allocAlignment);

        // Check previous suballocations for BufferImageGranularity conflicts.
        // Make bigger alignment if necessary.
        if (bufferImageGranularity > 1 && bufferImageGranularity != allocAlignment && !suballocations1st.empty())
        {
            bool bufferImageGranularityConflict = false;
            for (size_t prevSuballocIndex = suballocations1st.size(); prevSuballocIndex--; )
            {
                const VmaSuballocation& prevSuballoc = suballocations1st[prevSuballocIndex];
                if (VmaBlocksOnSamePage(prevSuballoc.offset, prevSuballoc.size, resultOffset, bufferImageGranularity))
                {
                    if (VmaIsBufferImageGranularityConflict(prevSuballoc.type, allocType))
                    {
                        bufferImageGranularityConflict = true;
                        break;
                    }
                }
                else
                    // Already on previous page.
                    break;
            }
            if (bufferImageGranularityConflict)
            {
                resultOffset = VmaAlignUp(resultOffset, bufferImageGranularity);
            }
        }

        const VkDeviceSize freeSpaceEnd = m_2ndVectorMode == SECOND_VECTOR_DOUBLE_STACK ?
            suballocations2nd.back().offset : blockSize;

        // There is enough free space at the end after alignment.
        if (resultOffset + allocSize + debugMargin <= freeSpaceEnd)
        {
            // Check next suballocations for BufferImageGranularity conflicts.
            // If conflict exists, allocation cannot be made here.
            if ((allocSize % bufferImageGranularity || resultOffset % bufferImageGranularity) && m_2ndVectorMode == SECOND_VECTOR_DOUBLE_STACK)
            {
                for (size_t nextSuballocIndex = suballocations2nd.size(); nextSuballocIndex--; )
                {
                    const VmaSuballocation& nextSuballoc = suballocations2nd[nextSuballocIndex];
                    if (VmaBlocksOnSamePage(resultOffset, allocSize, nextSuballoc.offset, bufferImageGranularity))
                    {
                        if (VmaIsBufferImageGranularityConflict(allocType, nextSuballoc.type))
                        {
                            return false;
                        }
                    }
                    else
                    {
                        // Already on previous page.
                        break;
                    }
                }
            }

            // All tests passed: Success.
            pAllocationRequest->allocHandle = (VmaAllocHandle)(resultOffset + 1);
            // pAllocationRequest->item, customData unused.
            pAllocationRequest->type = VmaAllocationRequestType::EndOf1st;
            return true;
        }
    }

    // Wrap-around to end of 2nd vector. Try to allocate there, watching for the
    // beginning of 1st vector as the end of free space.
    if (m_2ndVectorMode == SECOND_VECTOR_EMPTY || m_2ndVectorMode == SECOND_VECTOR_RING_BUFFER)
    {
        VMA_ASSERT(!suballocations1st.empty());

        VkDeviceSize resultBaseOffset = 0;
        if (!suballocations2nd.empty())
        {
            const VmaSuballocation& lastSuballoc = suballocations2nd.back();
            resultBaseOffset = lastSuballoc.offset + lastSuballoc.size + debugMargin;
        }

        // Start from offset equal to beginning of free space.
        VkDeviceSize resultOffset = resultBaseOffset;

        // Apply alignment.
        resultOffset = VmaAlignUp(resultOffset, allocAlignment);

        // Check previous suballocations for BufferImageGranularity conflicts.
        // Make bigger alignment if necessary.
        if (bufferImageGranularity > 1 && bufferImageGranularity != allocAlignment && !suballocations2nd.empty())
        {
            bool bufferImageGranularityConflict = false;
            for (size_t prevSuballocIndex = suballocations2nd.size(); prevSuballocIndex--; )
            {
                const VmaSuballocation& prevSuballoc = suballocations2nd[prevSuballocIndex];
                if (VmaBlocksOnSamePage(prevSuballoc.offset, prevSuballoc.size, resultOffset, bufferImageGranularity))
                {
                    if (VmaIsBufferImageGranularityConflict(prevSuballoc.type, allocType))
                    {
                        bufferImageGranularityConflict = true;
                        break;
                    }
                }
                else
                    // Already on previous page.
                    break;
            }
            if (bufferImageGranularityConflict)
            {
                resultOffset = VmaAlignUp(resultOffset, bufferImageGranularity);
            }
        }

        size_t index1st = m_1stNullItemsBeginCount;

        // There is enough free space at the end after alignment.
        if ((index1st == suballocations1st.size() && resultOffset + allocSize + debugMargin <= blockSize) ||
            (index1st < suballocations1st.size() && resultOffset + allocSize + debugMargin <= suballocations1st[index1st].offset))
        {
            // Check next suballocations for BufferImageGranularity conflicts.
            // If conflict exists, allocation cannot be made here.
            if (allocSize % bufferImageGranularity || resultOffset % bufferImageGranularity)
            {
                for (size_t nextSuballocIndex = index1st;
                    nextSuballocIndex < suballocations1st.size();
                    nextSuballocIndex++)
                {
                    const VmaSuballocation& nextSuballoc = suballocations1st[nextSuballocIndex];
                    if (VmaBlocksOnSamePage(resultOffset, allocSize, nextSuballoc.offset, bufferImageGranularity))
                    {
                        if (VmaIsBufferImageGranularityConflict(allocType, nextSuballoc.type))
                        {
                            return false;
                        }
                    }
                    else
                    {
                        // Already on next page.
                        break;
                    }
                }
            }

            // All tests passed: Success.
            pAllocationRequest->allocHandle = (VmaAllocHandle)(resultOffset + 1);
            pAllocationRequest->type = VmaAllocationRequestType::EndOf2nd;
            // pAllocationRequest->item, customData unused.
            return true;
        }
    }

    return false;
}
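
// CreateAllocationRequest_LowerAddress thus tries at most two placements:
// (1) append after the last item of the 1st vector, bounded by the block end
// or, in double-stack mode, by the lowest upper-stack offset; (2) if the 2nd
// vector is empty or already acts as a ring buffer, wrap around and append
// after the last item of the 2nd vector, bounded by the first used offset of
// the 1st vector. Both paths reserve debugMargin and re-align upward when a
// bufferImageGranularity conflict with a neighbor on the same page is detected.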

bool VmaBlockMetadata_Linear::CreateAllocationRequest_UpperAddress(
    VkDeviceSize allocSize,
    VkDeviceSize allocAlignment,
    VmaSuballocationType allocType,
    uint32_t strategy,
    VmaAllocationRequest* pAllocationRequest)
{
    const VkDeviceSize blockSize = GetSize();
    const VkDeviceSize bufferImageGranularity = GetBufferImageGranularity();
    SuballocationVectorType& suballocations1st = AccessSuballocations1st();
    SuballocationVectorType& suballocations2nd = AccessSuballocations2nd();

    if (m_2ndVectorMode == SECOND_VECTOR_RING_BUFFER)
    {
        VMA_ASSERT(0 && "Trying to use pool with linear algorithm as double stack, while it is already being used as ring buffer.");
        return false;
    }

    // Try to allocate before 2nd.back(), or end of block if 2nd.empty().
    if (allocSize > blockSize)
    {
        return false;
    }
    VkDeviceSize resultBaseOffset = blockSize - allocSize;
    if (!suballocations2nd.empty())
    {
        const VmaSuballocation& lastSuballoc = suballocations2nd.back();
        resultBaseOffset = lastSuballoc.offset - allocSize;
        if (allocSize > lastSuballoc.offset)
        {
            return false;
        }
    }

    // Start from offset equal to end of free space.
    VkDeviceSize resultOffset = resultBaseOffset;

    const VkDeviceSize debugMargin = GetDebugMargin();

    // Apply debugMargin at the end.
    if (debugMargin > 0)
    {
        if (resultOffset < debugMargin)
        {
            return false;
        }
        resultOffset -= debugMargin;
    }

    // Apply alignment.
    resultOffset = VmaAlignDown(resultOffset, allocAlignment);

    // Check next suballocations from 2nd for BufferImageGranularity conflicts.
    // Make bigger alignment if necessary.
    if (bufferImageGranularity > 1 && bufferImageGranularity != allocAlignment && !suballocations2nd.empty())
    {
        bool bufferImageGranularityConflict = false;
        for (size_t nextSuballocIndex = suballocations2nd.size(); nextSuballocIndex--; )
        {
            const VmaSuballocation& nextSuballoc = suballocations2nd[nextSuballocIndex];
            if (VmaBlocksOnSamePage(resultOffset, allocSize, nextSuballoc.offset, bufferImageGranularity))
            {
                if (VmaIsBufferImageGranularityConflict(nextSuballoc.type, allocType))
                {
                    bufferImageGranularityConflict = true;
                    break;
                }
            }
            else
                // Already on previous page.
                break;
        }
        if (bufferImageGranularityConflict)
        {
            resultOffset = VmaAlignDown(resultOffset, bufferImageGranularity);
        }
    }

    // There is enough free space.
    const VkDeviceSize endOf1st = !suballocations1st.empty() ?
        suballocations1st.back().offset + suballocations1st.back().size :
        0;
    if (endOf1st + debugMargin <= resultOffset)
    {
        // Check previous suballocations for BufferImageGranularity conflicts.
        // If conflict exists, allocation cannot be made here.
        if (bufferImageGranularity > 1)
        {
            for (size_t prevSuballocIndex = suballocations1st.size(); prevSuballocIndex--; )
            {
                const VmaSuballocation& prevSuballoc = suballocations1st[prevSuballocIndex];
                if (VmaBlocksOnSamePage(prevSuballoc.offset, prevSuballoc.size, resultOffset, bufferImageGranularity))
                {
                    if (VmaIsBufferImageGranularityConflict(allocType, prevSuballoc.type))
                    {
                        return false;
                    }
                }
                else
                {
                    // Already on next page.
                    break;
                }
            }
        }

        // All tests passed: Success.
        pAllocationRequest->allocHandle = (VmaAllocHandle)(resultOffset + 1);
        // pAllocationRequest->item unused.
        pAllocationRequest->type = VmaAllocationRequestType::UpperAddress;
        return true;
    }

    return false;
}
#endif // _VMA_BLOCK_METADATA_LINEAR_FUNCTIONS
#endif // _VMA_BLOCK_METADATA_LINEAR
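
/*
Illustrative usage sketch (not part of the implementation): the linear
metadata above is selected by creating a custom pool with
VMA_POOL_CREATE_LINEAR_ALGORITHM_BIT. Variable names are placeholders.

    VmaPoolCreateInfo poolCreateInfo = {};
    poolCreateInfo.memoryTypeIndex = memTypeIndex; // e.g. from vmaFindMemoryTypeIndex().
    poolCreateInfo.flags = VMA_POOL_CREATE_LINEAR_ALGORITHM_BIT;
    poolCreateInfo.blockSize = 64ull * 1024 * 1024; // one fixed block -> one linear metadata object.
    poolCreateInfo.maxBlockCount = 1;

    VmaPool pool;
    VkResult res = vmaCreatePool(allocator, &poolCreateInfo, &pool);

Allocations made with VmaAllocationCreateInfo::pool = pool then go through
CreateAllocationRequest_LowerAddress/_UpperAddress above; the upper-address
path is chosen by VMA_ALLOCATION_CREATE_UPPER_ADDRESS_BIT.
*/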

#if 0
#ifndef _VMA_BLOCK_METADATA_BUDDY
/*
- GetSize() is the original size of allocated memory block.
- m_UsableSize is this size aligned down to a power of two.
  All allocations and calculations happen relative to m_UsableSize.
- GetUnusableSize() is the difference between them.
  It is reported as separate, unused range, not available for allocations.

Node at level 0 has size = m_UsableSize.
Each next level contains nodes with size 2 times smaller than current level.
m_LevelCount is the maximum number of levels to use in the current object.
*/
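// Worked example of the sizes above (illustrative numbers): for a block with
// GetSize() = 1000, m_UsableSize = 512 (1000 aligned down to a power of two),
// so GetUnusableSize() = 488 is reported as a single, permanently unused
// range. Level 0 then holds one 512-byte node, level 1 two 256-byte nodes,
// level 2 four 128-byte nodes, and so on for m_LevelCount levels.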
9265 class VmaBlockMetadata_Buddy : public VmaBlockMetadata
9266 {
9267     VMA_CLASS_NO_COPY(VmaBlockMetadata_Buddy)
9268 public:
9269     VmaBlockMetadata_Buddy(const VkAllocationCallbacks* pAllocationCallbacks,
9270         VkDeviceSize bufferImageGranularity, bool isVirtual);
9271     virtual ~VmaBlockMetadata_Buddy();
9272 
9273     size_t GetAllocationCount() const override { return m_AllocationCount; }
9274     VkDeviceSize GetSumFreeSize() const override { return m_SumFreeSize + GetUnusableSize(); }
9275     bool IsEmpty() const override { return m_Root->type == Node::TYPE_FREE; }
9276     VkResult CheckCorruption(const void* pBlockData) override { return VK_ERROR_FEATURE_NOT_PRESENT; }
9277     VkDeviceSize GetAllocationOffset(VmaAllocHandle allocHandle) const override { return (VkDeviceSize)allocHandle - 1; };
9278     void DebugLogAllAllocations() const override { DebugLogAllAllocationNode(m_Root, 0); }
9279 
9280     void Init(VkDeviceSize size) override;
9281     bool Validate() const override;
9282 
9283     void AddDetailedStatistics(VmaDetailedStatistics& inoutStats) const override;
9284     void AddStatistics(VmaStatistics& inoutStats) const override;
9285 
9286 #if VMA_STATS_STRING_ENABLED
9287     void PrintDetailedMap(class VmaJsonWriter& json, uint32_t mapRefCount) const override;
9288 #endif
9289 
9290     bool CreateAllocationRequest(
9291         VkDeviceSize allocSize,
9292         VkDeviceSize allocAlignment,
9293         bool upperAddress,
9294         VmaSuballocationType allocType,
9295         uint32_t strategy,
9296         VmaAllocationRequest* pAllocationRequest) override;
9297 
9298     void Alloc(
9299         const VmaAllocationRequest& request,
9300         VmaSuballocationType type,
9301         void* userData) override;
9302 
9303     void Free(VmaAllocHandle allocHandle) override;
9304     void GetAllocationInfo(VmaAllocHandle allocHandle, VmaVirtualAllocationInfo& outInfo) override;
9305     void* GetAllocationUserData(VmaAllocHandle allocHandle) const override;
9306     VmaAllocHandle GetAllocationListBegin() const override;
9307     VmaAllocHandle GetNextAllocation(VmaAllocHandle prevAlloc) const override;
9308     void Clear() override;
9309     void SetAllocationUserData(VmaAllocHandle allocHandle, void* userData) override;
9310 
9311 private:
9312     static const size_t MAX_LEVELS = 48;
9313 
9314     struct ValidationContext
9315     {
9316         size_t calculatedAllocationCount = 0;
9317         size_t calculatedFreeCount = 0;
9318         VkDeviceSize calculatedSumFreeSize = 0;
9319     };
9320     struct Node
9321     {
9322         VkDeviceSize offset;
9323         enum TYPE
9324         {
9325             TYPE_FREE,
9326             TYPE_ALLOCATION,
9327             TYPE_SPLIT,
9328             TYPE_COUNT
9329         } type;
9330         Node* parent;
9331         Node* buddy;
9332 
9333         union
9334         {
9335             struct
9336             {
9337                 Node* prev;
9338                 Node* next;
9339             } free;
9340             struct
9341             {
9342                 void* userData;
9343             } allocation;
9344             struct
9345             {
9346                 Node* leftChild;
9347             } split;
9348         };
9349     };
9350 
9351     // Size of the memory block aligned down to a power of two.
9352     VkDeviceSize m_UsableSize;
9353     uint32_t m_LevelCount;
9354     VmaPoolAllocator<Node> m_NodeAllocator;
9355     Node* m_Root;
9356     struct
9357     {
9358         Node* front;
9359         Node* back;
9360     } m_FreeList[MAX_LEVELS];
9361 
9362     // Number of nodes in the tree with type == TYPE_ALLOCATION.
9363     size_t m_AllocationCount;
9364     // Number of nodes in the tree with type == TYPE_FREE.
9365     size_t m_FreeCount;
9366     // Doesn't include space wasted due to internal fragmentation - allocation sizes are just aligned up to node sizes.
9367     // Doesn't include unusable size.
9368     VkDeviceSize m_SumFreeSize;
9369 
9370     VkDeviceSize GetUnusableSize() const { return GetSize() - m_UsableSize; }
9371     VkDeviceSize LevelToNodeSize(uint32_t level) const { return m_UsableSize >> level; }
9372 
9373     VkDeviceSize AlignAllocationSize(VkDeviceSize size) const
9374     {
9375         if (!IsVirtual())
9376         {
9377             size = VmaAlignUp(size, (VkDeviceSize)16);
9378         }
9379         return VmaNextPow2(size);
9380     }
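    // Illustrative example: in non-virtual mode a 100-byte request is aligned up
    // to 112 (a multiple of 16) and then rounded to the next power of two, 128,
    // so every allocation occupies exactly one buddy node; in virtual mode the
    // 16-byte pre-alignment is skipped and 100 rounds directly to 128.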
9381     Node* FindAllocationNode(VkDeviceSize offset, uint32_t& outLevel) const;
9382     void DeleteNodeChildren(Node* node);
9383     bool ValidateNode(ValidationContext& ctx, const Node* parent, const Node* curr, uint32_t level, VkDeviceSize levelNodeSize) const;
9384     uint32_t AllocSizeToLevel(VkDeviceSize allocSize) const;
9385     void AddNodeToDetailedStatistics(VmaDetailedStatistics& inoutStats, const Node* node, VkDeviceSize levelNodeSize) const;
9386     // Adds node to the front of FreeList at given level.
9387     // node->type must be FREE.
9388     // node->free.prev, next can be undefined.
9389     void AddToFreeListFront(uint32_t level, Node* node);
9390     // Removes node from FreeList at given level.
9391     // node->type must be FREE.
9392     // node->free.prev, next stay untouched.
9393     void RemoveFromFreeList(uint32_t level, Node* node);
9394     void DebugLogAllAllocationNode(Node* node, uint32_t level) const;
9395 
9396 #if VMA_STATS_STRING_ENABLED
9397     void PrintDetailedMapNode(class VmaJsonWriter& json, const Node* node, VkDeviceSize levelNodeSize) const;
9398 #endif
9399 };
9400 
9401 #ifndef _VMA_BLOCK_METADATA_BUDDY_FUNCTIONS
9402 VmaBlockMetadata_Buddy::VmaBlockMetadata_Buddy(const VkAllocationCallbacks* pAllocationCallbacks,
9403     VkDeviceSize bufferImageGranularity, bool isVirtual)
9404     : VmaBlockMetadata(pAllocationCallbacks, bufferImageGranularity, isVirtual),
9405     m_NodeAllocator(pAllocationCallbacks, 32), // firstBlockCapacity
9406     m_Root(VMA_NULL),
9407     m_AllocationCount(0),
9408     m_FreeCount(1),
9409     m_SumFreeSize(0)
9410 {
9411     memset(m_FreeList, 0, sizeof(m_FreeList));
9412 }
9413 
9414 VmaBlockMetadata_Buddy::~VmaBlockMetadata_Buddy()
9415 {
9416     DeleteNodeChildren(m_Root);
9417     m_NodeAllocator.Free(m_Root);
9418 }
9419 
9420 void VmaBlockMetadata_Buddy::Init(VkDeviceSize size)
9421 {
9422     VmaBlockMetadata::Init(size);
9423 
9424     m_UsableSize = VmaPrevPow2(size);
9425     m_SumFreeSize = m_UsableSize;
9426 
9427     // Calculate m_LevelCount.
9428     const VkDeviceSize minNodeSize = IsVirtual() ? 1 : 16;
9429     m_LevelCount = 1;
9430     while (m_LevelCount < MAX_LEVELS &&
9431         LevelToNodeSize(m_LevelCount) >= minNodeSize)
9432     {
9433         ++m_LevelCount;
9434     }
9435 
9436     Node* rootNode = m_NodeAllocator.Alloc();
9437     rootNode->offset = 0;
9438     rootNode->type = Node::TYPE_FREE;
9439     rootNode->parent = VMA_NULL;
9440     rootNode->buddy = VMA_NULL;
9441 
9442     m_Root = rootNode;
9443     AddToFreeListFront(0, rootNode);
9444 }
9445 
9446 bool VmaBlockMetadata_Buddy::Validate() const
9447 {
9448     // Validate tree.
9449     ValidationContext ctx;
9450     if (!ValidateNode(ctx, VMA_NULL, m_Root, 0, LevelToNodeSize(0)))
9451     {
9452         VMA_VALIDATE(false && "ValidateNode failed.");
9453     }
9454     VMA_VALIDATE(m_AllocationCount == ctx.calculatedAllocationCount);
9455     VMA_VALIDATE(m_SumFreeSize == ctx.calculatedSumFreeSize);
9456 
9457     // Validate free node lists.
9458     for (uint32_t level = 0; level < m_LevelCount; ++level)
9459     {
9460         VMA_VALIDATE(m_FreeList[level].front == VMA_NULL ||
9461             m_FreeList[level].front->free.prev == VMA_NULL);
9462 
9463         for (Node* node = m_FreeList[level].front;
9464             node != VMA_NULL;
9465             node = node->free.next)
9466         {
9467             VMA_VALIDATE(node->type == Node::TYPE_FREE);
9468 
9469             if (node->free.next == VMA_NULL)
9470             {
9471                 VMA_VALIDATE(m_FreeList[level].back == node);
9472             }
9473             else
9474             {
9475                 VMA_VALIDATE(node->free.next->free.prev == node);
9476             }
9477         }
9478     }
9479 
9480     // Validate that free lists at higher levels are empty.
9481     for (uint32_t level = m_LevelCount; level < MAX_LEVELS; ++level)
9482     {
9483         VMA_VALIDATE(m_FreeList[level].front == VMA_NULL && m_FreeList[level].back == VMA_NULL);
9484     }
9485 
9486     return true;
9487 }
9488 
9489 void VmaBlockMetadata_Buddy::AddDetailedStatistics(VmaDetailedStatistics& inoutStats) const
9490 {
9491     inoutStats.statistics.blockCount++;
9492     inoutStats.statistics.blockBytes += GetSize();
9493 
9494     AddNodeToDetailedStatistics(inoutStats, m_Root, LevelToNodeSize(0));
9495 
9496     const VkDeviceSize unusableSize = GetUnusableSize();
9497     if (unusableSize > 0)
9498         VmaAddDetailedStatisticsUnusedRange(inoutStats, unusableSize);
9499 }
9500 
9501 void VmaBlockMetadata_Buddy::AddStatistics(VmaStatistics& inoutStats) const
9502 {
9503     inoutStats.blockCount++;
9504     inoutStats.allocationCount += (uint32_t)m_AllocationCount;
9505     inoutStats.blockBytes += GetSize();
9506     inoutStats.allocationBytes += GetSize() - m_SumFreeSize;
9507 }
9508 
9509 #if VMA_STATS_STRING_ENABLED
9510 void VmaBlockMetadata_Buddy::PrintDetailedMap(class VmaJsonWriter& json, uint32_t mapRefCount) const
9511 {
9512     VmaDetailedStatistics stats;
9513     VmaClearDetailedStatistics(stats);
9514     AddDetailedStatistics(stats);
9515 
9516     PrintDetailedMap_Begin(
9517         json,
9518         stats.statistics.blockBytes - stats.statistics.allocationBytes,
9519         stats.statistics.allocationCount,
9520         stats.unusedRangeCount,
9521         mapRefCount);
9522 
9523     PrintDetailedMapNode(json, m_Root, LevelToNodeSize(0));
9524 
9525     const VkDeviceSize unusableSize = GetUnusableSize();
9526     if (unusableSize > 0)
9527     {
9528         PrintDetailedMap_UnusedRange(json,
9529             m_UsableSize, // offset
9530             unusableSize); // size
9531     }
9532 
9533     PrintDetailedMap_End(json);
9534 }
9535 #endif // VMA_STATS_STRING_ENABLED
9536 
9537 bool VmaBlockMetadata_Buddy::CreateAllocationRequest(
9538     VkDeviceSize allocSize,
9539     VkDeviceSize allocAlignment,
9540     bool upperAddress,
9541     VmaSuballocationType allocType,
9542     uint32_t strategy,
9543     VmaAllocationRequest* pAllocationRequest)
9544 {
9545     VMA_ASSERT(!upperAddress && "VMA_ALLOCATION_CREATE_UPPER_ADDRESS_BIT can be used only with linear algorithm.");
9546 
9547     allocSize = AlignAllocationSize(allocSize);
9548 
9549     // Simple way to respect bufferImageGranularity. May be optimized some day.
9550     // Whenever it might be an OPTIMAL image...
9551     if (allocType == VMA_SUBALLOCATION_TYPE_UNKNOWN ||
9552         allocType == VMA_SUBALLOCATION_TYPE_IMAGE_UNKNOWN ||
9553         allocType == VMA_SUBALLOCATION_TYPE_IMAGE_OPTIMAL)
9554     {
9555         allocAlignment = VMA_MAX(allocAlignment, GetBufferImageGranularity());
9556         allocSize = VmaAlignUp(allocSize, GetBufferImageGranularity());
9557     }
9558 
9559     if (allocSize > m_UsableSize)
9560     {
9561         return false;
9562     }
9563 
9564     const uint32_t targetLevel = AllocSizeToLevel(allocSize);
9565     for (uint32_t level = targetLevel + 1; level--; )
9566     {
9567         for (Node* freeNode = m_FreeList[level].front;
9568             freeNode != VMA_NULL;
9569             freeNode = freeNode->free.next)
9570         {
9571             if (freeNode->offset % allocAlignment == 0)
9572             {
9573                 pAllocationRequest->type = VmaAllocationRequestType::Normal;
9574                 pAllocationRequest->allocHandle = (VmaAllocHandle)(freeNode->offset + 1);
9575                 pAllocationRequest->size = allocSize;
9576                 pAllocationRequest->customData = (void*)(uintptr_t)level;
9577                 return true;
9578             }
9579         }
9580     }
9581 
9582     return false;
9583 }
9584 
9585 void VmaBlockMetadata_Buddy::Alloc(
9586     const VmaAllocationRequest& request,
9587     VmaSuballocationType type,
9588     void* userData)
9589 {
9590     VMA_ASSERT(request.type == VmaAllocationRequestType::Normal);
9591 
9592     const uint32_t targetLevel = AllocSizeToLevel(request.size);
9593     uint32_t currLevel = (uint32_t)(uintptr_t)request.customData;
9594 
9595     Node* currNode = m_FreeList[currLevel].front;
9596     VMA_ASSERT(currNode != VMA_NULL && currNode->type == Node::TYPE_FREE);
9597     const VkDeviceSize offset = (VkDeviceSize)request.allocHandle - 1;
9598     while (currNode->offset != offset)
9599     {
9600         currNode = currNode->free.next;
9601         VMA_ASSERT(currNode != VMA_NULL && currNode->type == Node::TYPE_FREE);
9602     }
9603 
9604     // Go down, splitting free nodes.
9605     while (currLevel < targetLevel)
9606     {
9607         // currNode is already first free node at currLevel.
9608         // Remove it from list of free nodes at this currLevel.
9609         RemoveFromFreeList(currLevel, currNode);
9610 
9611         const uint32_t childrenLevel = currLevel + 1;
9612 
9613         // Create two free sub-nodes.
9614         Node* leftChild = m_NodeAllocator.Alloc();
9615         Node* rightChild = m_NodeAllocator.Alloc();
9616 
9617         leftChild->offset = currNode->offset;
9618         leftChild->type = Node::TYPE_FREE;
9619         leftChild->parent = currNode;
9620         leftChild->buddy = rightChild;
9621 
9622         rightChild->offset = currNode->offset + LevelToNodeSize(childrenLevel);
9623         rightChild->type = Node::TYPE_FREE;
9624         rightChild->parent = currNode;
9625         rightChild->buddy = leftChild;
9626 
9627         // Convert current currNode to split type.
9628         currNode->type = Node::TYPE_SPLIT;
9629         currNode->split.leftChild = leftChild;
9630 
9631         // Add child nodes to free list. Order is important!
9632         AddToFreeListFront(childrenLevel, rightChild);
9633         AddToFreeListFront(childrenLevel, leftChild);
9634 
9635         ++m_FreeCount;
9636         ++currLevel;
9637         currNode = m_FreeList[currLevel].front;
9638 
9639         /*
9640         We can be sure that currNode, as left child of node previously split,
9641         also fulfills the alignment requirement.
9642         */
9643     }
9644 
9645     // Remove from free list.
9646     VMA_ASSERT(currLevel == targetLevel &&
9647         currNode != VMA_NULL &&
9648         currNode->type == Node::TYPE_FREE);
9649     RemoveFromFreeList(currLevel, currNode);
9650 
9651     // Convert to allocation node.
9652     currNode->type = Node::TYPE_ALLOCATION;
9653     currNode->allocation.userData = userData;
9654 
9655     ++m_AllocationCount;
9656     --m_FreeCount;
9657     m_SumFreeSize -= request.size;
9658 }
9659 
9660 void VmaBlockMetadata_Buddy::GetAllocationInfo(VmaAllocHandle allocHandle, VmaVirtualAllocationInfo& outInfo)
9661 {
9662     uint32_t level = 0;
9663     outInfo.offset = (VkDeviceSize)allocHandle - 1;
9664     const Node* const node = FindAllocationNode(outInfo.offset, level);
9665     outInfo.size = LevelToNodeSize(level);
9666     outInfo.pUserData = node->allocation.userData;
9667 }
9668 
9669 void* VmaBlockMetadata_Buddy::GetAllocationUserData(VmaAllocHandle allocHandle) const
9670 {
9671     uint32_t level = 0;
9672     const Node* const node = FindAllocationNode((VkDeviceSize)allocHandle - 1, level);
9673     return node->allocation.userData;
9674 }
9675 
9676 VmaAllocHandle VmaBlockMetadata_Buddy::GetAllocationListBegin() const
9677 {
9678     // Function only used for defragmentation, which is disabled for this algorithm
9679     return VK_NULL_HANDLE;
9680 }
9681 
9682 VmaAllocHandle VmaBlockMetadata_Buddy::GetNextAllocation(VmaAllocHandle prevAlloc) const
9683 {
9684     // Function only used for defragmentation, which is disabled for this algorithm
9685     return VK_NULL_HANDLE;
9686 }
9687 
9688 void VmaBlockMetadata_Buddy::DeleteNodeChildren(Node* node)
9689 {
9690     if (node->type == Node::TYPE_SPLIT)
9691     {
9692         DeleteNodeChildren(node->split.leftChild->buddy);
9693         DeleteNodeChildren(node->split.leftChild);
9695         m_NodeAllocator.Free(node->split.leftChild->buddy);
9696         m_NodeAllocator.Free(node->split.leftChild);
9697     }
9698 }
9699 
9700 void VmaBlockMetadata_Buddy::Clear()
9701 {
9702     DeleteNodeChildren(m_Root);
9703     m_Root->type = Node::TYPE_FREE;
9704     m_AllocationCount = 0;
9705     m_FreeCount = 1;
9706     m_SumFreeSize = m_UsableSize;
9707 }
9708 
9709 void VmaBlockMetadata_Buddy::SetAllocationUserData(VmaAllocHandle allocHandle, void* userData)
9710 {
9711     uint32_t level = 0;
9712     Node* const node = FindAllocationNode((VkDeviceSize)allocHandle - 1, level);
9713     node->allocation.userData = userData;
9714 }
9715 
9716 VmaBlockMetadata_Buddy::Node* VmaBlockMetadata_Buddy::FindAllocationNode(VkDeviceSize offset, uint32_t& outLevel) const
9717 {
9718     Node* node = m_Root;
9719     VkDeviceSize nodeOffset = 0;
9720     outLevel = 0;
9721     VkDeviceSize levelNodeSize = LevelToNodeSize(0);
9722     while (node->type == Node::TYPE_SPLIT)
9723     {
9724         const VkDeviceSize nextLevelNodeSize = levelNodeSize >> 1;
9725         if (offset < nodeOffset + nextLevelNodeSize)
9726         {
9727             node = node->split.leftChild;
9728         }
9729         else
9730         {
9731             node = node->split.leftChild->buddy;
9732             nodeOffset += nextLevelNodeSize;
9733         }
9734         ++outLevel;
9735         levelNodeSize = nextLevelNodeSize;
9736     }
9737 
9738     VMA_ASSERT(node != VMA_NULL && node->type == Node::TYPE_ALLOCATION);
9739     return node;
9740 }
9741 
9742 bool VmaBlockMetadata_Buddy::ValidateNode(ValidationContext& ctx, const Node* parent, const Node* curr, uint32_t level, VkDeviceSize levelNodeSize) const
9743 {
9744     VMA_VALIDATE(level < m_LevelCount);
9745     VMA_VALIDATE(curr->parent == parent);
9746     VMA_VALIDATE((curr->buddy == VMA_NULL) == (parent == VMA_NULL));
9747     VMA_VALIDATE(curr->buddy == VMA_NULL || curr->buddy->buddy == curr);
9748     switch (curr->type)
9749     {
9750     case Node::TYPE_FREE:
9751         // curr->free.prev, next are validated separately.
9752         ctx.calculatedSumFreeSize += levelNodeSize;
9753         ++ctx.calculatedFreeCount;
9754         break;
9755     case Node::TYPE_ALLOCATION:
9756         ++ctx.calculatedAllocationCount;
9757         if (!IsVirtual())
9758         {
9759             VMA_VALIDATE(curr->allocation.userData != VMA_NULL);
9760         }
9761         break;
9762     case Node::TYPE_SPLIT:
9763     {
9764         const uint32_t childrenLevel = level + 1;
9765         const VkDeviceSize childrenLevelNodeSize = levelNodeSize >> 1;
9766         const Node* const leftChild = curr->split.leftChild;
9767         VMA_VALIDATE(leftChild != VMA_NULL);
9768         VMA_VALIDATE(leftChild->offset == curr->offset);
9769         if (!ValidateNode(ctx, curr, leftChild, childrenLevel, childrenLevelNodeSize))
9770         {
9771             VMA_VALIDATE(false && "ValidateNode for left child failed.");
9772         }
9773         const Node* const rightChild = leftChild->buddy;
9774         VMA_VALIDATE(rightChild->offset == curr->offset + childrenLevelNodeSize);
9775         if (!ValidateNode(ctx, curr, rightChild, childrenLevel, childrenLevelNodeSize))
9776         {
9777             VMA_VALIDATE(false && "ValidateNode for right child failed.");
9778         }
9779     }
9780     break;
9781     default:
9782         return false;
9783     }
9784 
9785     return true;
9786 }
9787 
9788 uint32_t VmaBlockMetadata_Buddy::AllocSizeToLevel(VkDeviceSize allocSize) const
9789 {
9790     // I know this could be optimized somehow e.g. by using std::log2p1 from C++20.
9791     uint32_t level = 0;
9792     VkDeviceSize currLevelNodeSize = m_UsableSize;
9793     VkDeviceSize nextLevelNodeSize = currLevelNodeSize >> 1;
9794     while (allocSize <= nextLevelNodeSize && level + 1 < m_LevelCount)
9795     {
9796         ++level;
9797         currLevelNodeSize >>= 1;
9798         nextLevelNodeSize >>= 1;
9799     }
9800     return level;
9801 }
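// Illustrative example: with m_UsableSize = 512, allocSize = 64 advances
// level 0 (512) -> 1 (256) -> 2 (128) -> 3 (64) and stops because the next
// node size (32) would be too small, returning 3: the deepest level whose
// node size still fits the request.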
9802 
9803 void VmaBlockMetadata_Buddy::Free(VmaAllocHandle allocHandle)
9804 {
9805     uint32_t level = 0;
9806     Node* node = FindAllocationNode((VkDeviceSize)allocHandle - 1, level);
9807 
9808     ++m_FreeCount;
9809     --m_AllocationCount;
9810     m_SumFreeSize += LevelToNodeSize(level);
9811 
9812     node->type = Node::TYPE_FREE;
9813 
9814     // Join free nodes if possible.
9815     while (level > 0 && node->buddy->type == Node::TYPE_FREE)
9816     {
9817         RemoveFromFreeList(level, node->buddy);
9818         Node* const parent = node->parent;
9819 
9820         m_NodeAllocator.Free(node->buddy);
9821         m_NodeAllocator.Free(node);
9822         parent->type = Node::TYPE_FREE;
9823 
9824         node = parent;
9825         --level;
9826         --m_FreeCount;
9827     }
9828 
9829     AddToFreeListFront(level, node);
9830 }
9831 
9832 void VmaBlockMetadata_Buddy::AddNodeToDetailedStatistics(VmaDetailedStatistics& inoutStats, const Node* node, VkDeviceSize levelNodeSize) const
9833 {
9834     switch (node->type)
9835     {
9836     case Node::TYPE_FREE:
9837         VmaAddDetailedStatisticsUnusedRange(inoutStats, levelNodeSize);
9838         break;
9839     case Node::TYPE_ALLOCATION:
9840         VmaAddDetailedStatisticsAllocation(inoutStats, levelNodeSize);
9841         break;
9842     case Node::TYPE_SPLIT:
9843     {
9844         const VkDeviceSize childrenNodeSize = levelNodeSize / 2;
9845         const Node* const leftChild = node->split.leftChild;
9846         AddNodeToDetailedStatistics(inoutStats, leftChild, childrenNodeSize);
9847         const Node* const rightChild = leftChild->buddy;
9848         AddNodeToDetailedStatistics(inoutStats, rightChild, childrenNodeSize);
9849     }
9850     break;
9851     default:
9852         VMA_ASSERT(0);
9853     }
9854 }
9855 
9856 void VmaBlockMetadata_Buddy::AddToFreeListFront(uint32_t level, Node* node)
9857 {
9858     VMA_ASSERT(node->type == Node::TYPE_FREE);
9859 
9860     // List is empty.
9861     Node* const frontNode = m_FreeList[level].front;
9862     if (frontNode == VMA_NULL)
9863     {
9864         VMA_ASSERT(m_FreeList[level].back == VMA_NULL);
9865         node->free.prev = node->free.next = VMA_NULL;
9866         m_FreeList[level].front = m_FreeList[level].back = node;
9867     }
9868     else
9869     {
9870         VMA_ASSERT(frontNode->free.prev == VMA_NULL);
9871         node->free.prev = VMA_NULL;
9872         node->free.next = frontNode;
9873         frontNode->free.prev = node;
9874         m_FreeList[level].front = node;
9875     }
9876 }
9877 
9878 void VmaBlockMetadata_Buddy::RemoveFromFreeList(uint32_t level, Node* node)
9879 {
9880     VMA_ASSERT(m_FreeList[level].front != VMA_NULL);
9881 
9882     // It is at the front.
9883     if (node->free.prev == VMA_NULL)
9884     {
9885         VMA_ASSERT(m_FreeList[level].front == node);
9886         m_FreeList[level].front = node->free.next;
9887     }
9888     else
9889     {
9890         Node* const prevFreeNode = node->free.prev;
9891         VMA_ASSERT(prevFreeNode->free.next == node);
9892         prevFreeNode->free.next = node->free.next;
9893     }
9894 
9895     // It is at the back.
9896     if (node->free.next == VMA_NULL)
9897     {
9898         VMA_ASSERT(m_FreeList[level].back == node);
9899         m_FreeList[level].back = node->free.prev;
9900     }
9901     else
9902     {
9903         Node* const nextFreeNode = node->free.next;
9904         VMA_ASSERT(nextFreeNode->free.prev == node);
9905         nextFreeNode->free.prev = node->free.prev;
9906     }
9907 }
9908 
9909 void VmaBlockMetadata_Buddy::DebugLogAllAllocationNode(Node* node, uint32_t level) const
9910 {
9911     switch (node->type)
9912     {
9913     case Node::TYPE_FREE:
9914         break;
9915     case Node::TYPE_ALLOCATION:
9916         DebugLogAllocation(node->offset, LevelToNodeSize(level), node->allocation.userData);
9917         break;
9918     case Node::TYPE_SPLIT:
9919     {
9920         ++level;
9921         DebugLogAllAllocationNode(node->split.leftChild, level);
9922         DebugLogAllAllocationNode(node->split.leftChild->buddy, level);
9923     }
9924     break;
9925     default:
9926         VMA_ASSERT(0);
9927     }
9928 }
9929 
9930 #if VMA_STATS_STRING_ENABLED
9931 void VmaBlockMetadata_Buddy::PrintDetailedMapNode(class VmaJsonWriter& json, const Node* node, VkDeviceSize levelNodeSize) const
9932 {
9933     switch (node->type)
9934     {
9935     case Node::TYPE_FREE:
9936         PrintDetailedMap_UnusedRange(json, node->offset, levelNodeSize);
9937         break;
9938     case Node::TYPE_ALLOCATION:
9939         PrintDetailedMap_Allocation(json, node->offset, levelNodeSize, node->allocation.userData);
9940         break;
9941     case Node::TYPE_SPLIT:
9942     {
9943         const VkDeviceSize childrenNodeSize = levelNodeSize / 2;
9944         const Node* const leftChild = node->split.leftChild;
9945         PrintDetailedMapNode(json, leftChild, childrenNodeSize);
9946         const Node* const rightChild = leftChild->buddy;
9947         PrintDetailedMapNode(json, rightChild, childrenNodeSize);
9948     }
9949     break;
9950     default:
9951         VMA_ASSERT(0);
9952     }
9953 }
9954 #endif // VMA_STATS_STRING_ENABLED
9955 #endif // _VMA_BLOCK_METADATA_BUDDY_FUNCTIONS
9956 #endif // _VMA_BLOCK_METADATA_BUDDY
9957 #endif // #if 0
9958 
9959 #ifndef _VMA_BLOCK_METADATA_TLSF
9960 // To avoid searching the current, larger region when the first allocation attempt fails,
9961 // and to skip straight to a smaller range instead, use VMA_ALLOCATION_CREATE_STRATEGY_MIN_MEMORY_BIT
9962 // as the strategy in CreateAllocationRequest(). When fragmentation and reuse of previous blocks
9963 // don't matter, use VMA_ALLOCATION_CREATE_STRATEGY_MIN_TIME_BIT for the fastest possible allocation time.
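// A minimal usage sketch of these strategies through the public API (illustrative
// only; assumes an initialized VmaAllocator and a filled VkBufferCreateInfo):
//
//     VmaAllocationCreateInfo allocCreateInfo = {};
//     allocCreateInfo.usage = VMA_MEMORY_USAGE_AUTO;
//     allocCreateInfo.flags = VMA_ALLOCATION_CREATE_STRATEGY_MIN_MEMORY_BIT;
//     // ...or VMA_ALLOCATION_CREATE_STRATEGY_MIN_TIME_BIT for the fastest search path.
//     VkBuffer buf;
//     VmaAllocation alloc;
//     VkResult res = vmaCreateBuffer(allocator, &bufCreateInfo, &allocCreateInfo,
//         &buf, &alloc, VMA_NULL);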
9964 class VmaBlockMetadata_TLSF : public VmaBlockMetadata
9965 {
9966     VMA_CLASS_NO_COPY(VmaBlockMetadata_TLSF)
9967 public:
9968     VmaBlockMetadata_TLSF(const VkAllocationCallbacks* pAllocationCallbacks,
9969         VkDeviceSize bufferImageGranularity, bool isVirtual);
9970     virtual ~VmaBlockMetadata_TLSF();
9971 
9972     size_t GetAllocationCount() const override { return m_AllocCount; }
9973     size_t GetFreeRegionsCount() const override { return m_BlocksFreeCount + 1; }
9974     VkDeviceSize GetSumFreeSize() const override { return m_BlocksFreeSize + m_NullBlock->size; }
9975     bool IsEmpty() const override { return m_NullBlock->offset == 0; }
9976     VkDeviceSize GetAllocationOffset(VmaAllocHandle allocHandle) const override { return ((Block*)allocHandle)->offset; };
9977 
9978     void Init(VkDeviceSize size) override;
9979     bool Validate() const override;
9980 
9981     void AddDetailedStatistics(VmaDetailedStatistics& inoutStats) const override;
9982     void AddStatistics(VmaStatistics& inoutStats) const override;
9983 
9984 #if VMA_STATS_STRING_ENABLED
9985     void PrintDetailedMap(class VmaJsonWriter& json) const override;
9986 #endif
9987 
9988     bool CreateAllocationRequest(
9989         VkDeviceSize allocSize,
9990         VkDeviceSize allocAlignment,
9991         bool upperAddress,
9992         VmaSuballocationType allocType,
9993         uint32_t strategy,
9994         VmaAllocationRequest* pAllocationRequest) override;
9995 
9996     VkResult CheckCorruption(const void* pBlockData) override;
9997     void Alloc(
9998         const VmaAllocationRequest& request,
9999         VmaSuballocationType type,
10000         void* userData) override;
10001 
10002     void Free(VmaAllocHandle allocHandle) override;
10003     void GetAllocationInfo(VmaAllocHandle allocHandle, VmaVirtualAllocationInfo& outInfo) override;
10004     void* GetAllocationUserData(VmaAllocHandle allocHandle) const override;
10005     VmaAllocHandle GetAllocationListBegin() const override;
10006     VmaAllocHandle GetNextAllocation(VmaAllocHandle prevAlloc) const override;
10007     VkDeviceSize GetNextFreeRegionSize(VmaAllocHandle alloc) const override;
10008     void Clear() override;
10009     void SetAllocationUserData(VmaAllocHandle allocHandle, void* userData) override;
10010     void DebugLogAllAllocations() const override;
10011 
10012 private:
10013     // According to the original paper, a value of 4 or 5 is preferable:
10014     // M. Masmano, I. Ripoll, A. Crespo, and J. Real "TLSF: a New Dynamic Memory Allocator for Real-Time Systems"
10015     // http://www.gii.upv.es/tlsf/files/ecrts04_tlsf.pdf
10016     static const uint8_t SECOND_LEVEL_INDEX = 5;
10017     static const uint16_t SMALL_BUFFER_SIZE = 256;
10018     static const uint32_t INITIAL_BLOCK_ALLOC_COUNT = 16;
10019     static const uint8_t MEMORY_CLASS_SHIFT = 7;
10020     static const uint8_t MAX_MEMORY_CLASSES = 65 - MEMORY_CLASS_SHIFT;
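    // Illustrative example: with MEMORY_CLASS_SHIFT = 7, any size up to
    // SMALL_BUFFER_SIZE = 256 falls into memory class 0, while a 1024-byte block
    // (most significant bit at index 10) lands in class 10 - 7 = 3. Every class
    // above 0 is split into 2^SECOND_LEVEL_INDEX = 32 second-level lists, and
    // 65 - 7 = 58 classes are enough to cover the whole 64-bit VkDeviceSize range.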
10021 
10022     class Block
10023     {
10024     public:
10025         VkDeviceSize offset;
10026         VkDeviceSize size;
10027         Block* prevPhysical;
10028         Block* nextPhysical;
10029 
10030         void MarkFree() { prevFree = VMA_NULL; }
10031         void MarkTaken() { prevFree = this; }
10032         bool IsFree() const { return prevFree != this; }
10033         void*& UserData() { VMA_HEAVY_ASSERT(!IsFree()); return userData; }
10034         Block*& PrevFree() { return prevFree; }
10035         Block*& NextFree() { VMA_HEAVY_ASSERT(IsFree()); return nextFree; }
10036 
10037     private:
10038         Block* prevFree; // Address of the same block here indicates that block is taken
10039         union
10040         {
10041             Block* nextFree;
10042             void* userData;
10043         };
10044     };
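    // Note on the encoding above: a block can never be its own predecessor in a
    // free list, so prevFree == this is an unambiguous "taken" sentinel. That is
    // what lets nextFree and userData share the union: only free blocks need a
    // next pointer and only taken blocks carry user data.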
10045 
10046     size_t m_AllocCount;
10047     // Total number of free blocks besides null block
10048     size_t m_BlocksFreeCount;
10049     // Total size of free blocks excluding null block
10050     VkDeviceSize m_BlocksFreeSize;
10051     uint32_t m_IsFreeBitmap;
10052     uint8_t m_MemoryClasses;
10053     uint32_t m_InnerIsFreeBitmap[MAX_MEMORY_CLASSES];
10054     uint32_t m_ListsCount;
10055     /*
10056     * 0: 0-3 lists for small buffers
10057     * 1+: 0-(2^SLI-1) lists for normal buffers
10058     */
10059     Block** m_FreeList;
10060     VmaPoolAllocator<Block> m_BlockAllocator;
10061     Block* m_NullBlock;
10062     VmaBlockBufferImageGranularity m_GranularityHandler;
10063 
10064     uint8_t SizeToMemoryClass(VkDeviceSize size) const;
10065     uint16_t SizeToSecondIndex(VkDeviceSize size, uint8_t memoryClass) const;
10066     uint32_t GetListIndex(uint8_t memoryClass, uint16_t secondIndex) const;
10067     uint32_t GetListIndex(VkDeviceSize size) const;
10068 
10069     void RemoveFreeBlock(Block* block);
10070     void InsertFreeBlock(Block* block);
10071     void MergeBlock(Block* block, Block* prev);
10072 
10073     Block* FindFreeBlock(VkDeviceSize size, uint32_t& listIndex) const;
10074     bool CheckBlock(
10075         Block& block,
10076         uint32_t listIndex,
10077         VkDeviceSize allocSize,
10078         VkDeviceSize allocAlignment,
10079         VmaSuballocationType allocType,
10080         VmaAllocationRequest* pAllocationRequest);
10081 };
10082 
10083 #ifndef _VMA_BLOCK_METADATA_TLSF_FUNCTIONS
10084 VmaBlockMetadata_TLSF::VmaBlockMetadata_TLSF(const VkAllocationCallbacks* pAllocationCallbacks,
10085     VkDeviceSize bufferImageGranularity, bool isVirtual)
10086     : VmaBlockMetadata(pAllocationCallbacks, bufferImageGranularity, isVirtual),
10087     m_AllocCount(0),
10088     m_BlocksFreeCount(0),
10089     m_BlocksFreeSize(0),
10090     m_IsFreeBitmap(0),
10091     m_MemoryClasses(0),
10092     m_ListsCount(0),
10093     m_FreeList(VMA_NULL),
10094     m_BlockAllocator(pAllocationCallbacks, INITIAL_BLOCK_ALLOC_COUNT),
10095     m_NullBlock(VMA_NULL),
10096     m_GranularityHandler(bufferImageGranularity) {}
10097 
10098 VmaBlockMetadata_TLSF::~VmaBlockMetadata_TLSF()
10099 {
10100     if (m_FreeList)
10101         vma_delete_array(GetAllocationCallbacks(), m_FreeList, m_ListsCount);
10102     m_GranularityHandler.Destroy(GetAllocationCallbacks());
10103 }
10104 
10105 void VmaBlockMetadata_TLSF::Init(VkDeviceSize size)
10106 {
10107     VmaBlockMetadata::Init(size);
10108 
10109     if (!IsVirtual())
10110         m_GranularityHandler.Init(GetAllocationCallbacks(), size);
10111 
10112     m_NullBlock = m_BlockAllocator.Alloc();
10113     m_NullBlock->size = size;
10114     m_NullBlock->offset = 0;
10115     m_NullBlock->prevPhysical = VMA_NULL;
10116     m_NullBlock->nextPhysical = VMA_NULL;
10117     m_NullBlock->MarkFree();
10118     m_NullBlock->NextFree() = VMA_NULL;
10119     m_NullBlock->PrevFree() = VMA_NULL;
10120     uint8_t memoryClass = SizeToMemoryClass(size);
10121     uint16_t sli = SizeToSecondIndex(size, memoryClass);
10122     m_ListsCount = (memoryClass == 0 ? 0 : (memoryClass - 1) * (1UL << SECOND_LEVEL_INDEX) + sli) + 1;
10123     if (IsVirtual())
10124         m_ListsCount += 1UL << SECOND_LEVEL_INDEX;
10125     else
10126         m_ListsCount += 4;
10127 
10128     m_MemoryClasses = memoryClass + 2;
10129     memset(m_InnerIsFreeBitmap, 0, MAX_MEMORY_CLASSES * sizeof(uint32_t));
10130 
10131     m_FreeList = vma_new_array(GetAllocationCallbacks(), Block*, m_ListsCount);
10132     memset(m_FreeList, 0, m_ListsCount * sizeof(Block*));
10133 }
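// Illustrative example (hypothetical 1 MiB non-virtual block): memoryClass =
// 20 - 7 = 13 and sli = ((1 << 20) >> 15) ^ 32 = 0, so m_ListsCount =
// (13 - 1) * 32 + 0 + 1 + 4 = 389 free-list heads: 4 for the small-buffer
// class 0 plus 32 per higher memory class up to the block size.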
10134 
10135 bool VmaBlockMetadata_TLSF::Validate() const
10136 {
10137     VMA_VALIDATE(GetSumFreeSize() <= GetSize());
10138 
10139     VkDeviceSize calculatedSize = m_NullBlock->size;
10140     VkDeviceSize calculatedFreeSize = m_NullBlock->size;
10141     size_t allocCount = 0;
10142     size_t freeCount = 0;
10143 
10144     // Check integrity of free lists
10145     for (uint32_t list = 0; list < m_ListsCount; ++list)
10146     {
10147         Block* block = m_FreeList[list];
10148         if (block != VMA_NULL)
10149         {
10150             VMA_VALIDATE(block->IsFree());
10151             VMA_VALIDATE(block->PrevFree() == VMA_NULL);
10152             while (block->NextFree())
10153             {
10154                 VMA_VALIDATE(block->NextFree()->IsFree());
10155                 VMA_VALIDATE(block->NextFree()->PrevFree() == block);
10156                 block = block->NextFree();
10157             }
10158         }
10159     }
10160 
10161     VkDeviceSize nextOffset = m_NullBlock->offset;
10162     auto validateCtx = m_GranularityHandler.StartValidation(GetAllocationCallbacks(), IsVirtual());
10163 
10164     VMA_VALIDATE(m_NullBlock->nextPhysical == VMA_NULL);
10165     if (m_NullBlock->prevPhysical)
10166     {
10167         VMA_VALIDATE(m_NullBlock->prevPhysical->nextPhysical == m_NullBlock);
10168     }
10169     // Check all blocks
10170     for (Block* prev = m_NullBlock->prevPhysical; prev != VMA_NULL; prev = prev->prevPhysical)
10171     {
10172         VMA_VALIDATE(prev->offset + prev->size == nextOffset);
10173         nextOffset = prev->offset;
10174         calculatedSize += prev->size;
10175 
10176         uint32_t listIndex = GetListIndex(prev->size);
10177         if (prev->IsFree())
10178         {
10179             ++freeCount;
10180             // Check if free block belongs to free list
10181             Block* freeBlock = m_FreeList[listIndex];
10182             VMA_VALIDATE(freeBlock != VMA_NULL);
10183 
10184             bool found = false;
10185             do
10186             {
10187                 if (freeBlock == prev)
10188                     found = true;
10189 
10190                 freeBlock = freeBlock->NextFree();
10191             } while (!found && freeBlock != VMA_NULL);
10192 
10193             VMA_VALIDATE(found);
10194             calculatedFreeSize += prev->size;
10195         }
10196         else
10197         {
10198             ++allocCount;
10199             // Check if taken block is not on a free list
10200             Block* freeBlock = m_FreeList[listIndex];
10201             while (freeBlock)
10202             {
10203                 VMA_VALIDATE(freeBlock != prev);
10204                 freeBlock = freeBlock->NextFree();
10205             }
10206 
10207             if (!IsVirtual())
10208             {
10209                 VMA_VALIDATE(m_GranularityHandler.Validate(validateCtx, prev->offset, prev->size));
10210             }
10211         }
10212 
10213         if (prev->prevPhysical)
10214         {
10215             VMA_VALIDATE(prev->prevPhysical->nextPhysical == prev);
10216         }
10217     }
10218 
10219     if (!IsVirtual())
10220     {
10221         VMA_VALIDATE(m_GranularityHandler.FinishValidation(validateCtx));
10222     }
10223 
10224     VMA_VALIDATE(nextOffset == 0);
10225     VMA_VALIDATE(calculatedSize == GetSize());
10226     VMA_VALIDATE(calculatedFreeSize == GetSumFreeSize());
10227     VMA_VALIDATE(allocCount == m_AllocCount);
10228     VMA_VALIDATE(freeCount == m_BlocksFreeCount);
10229 
10230     return true;
10231 }
10232 
10233 void VmaBlockMetadata_TLSF::AddDetailedStatistics(VmaDetailedStatistics& inoutStats) const
10234 {
10235     inoutStats.statistics.blockCount++;
10236     inoutStats.statistics.blockBytes += GetSize();
10237     if (m_NullBlock->size > 0)
10238         VmaAddDetailedStatisticsUnusedRange(inoutStats, m_NullBlock->size);
10239 
10240     for (Block* block = m_NullBlock->prevPhysical; block != VMA_NULL; block = block->prevPhysical)
10241     {
10242         if (block->IsFree())
10243             VmaAddDetailedStatisticsUnusedRange(inoutStats, block->size);
10244         else
10245             VmaAddDetailedStatisticsAllocation(inoutStats, block->size);
10246     }
10247 }
10248 
10249 void VmaBlockMetadata_TLSF::AddStatistics(VmaStatistics& inoutStats) const
10250 {
10251     inoutStats.blockCount++;
10252     inoutStats.allocationCount += (uint32_t)m_AllocCount;
10253     inoutStats.blockBytes += GetSize();
10254     inoutStats.allocationBytes += GetSize() - GetSumFreeSize();
10255 }
10256 
10257 #if VMA_STATS_STRING_ENABLED
10258 void VmaBlockMetadata_TLSF::PrintDetailedMap(class VmaJsonWriter& json) const
10259 {
10260     size_t blockCount = m_AllocCount + m_BlocksFreeCount;
10261     VmaStlAllocator<Block*> allocator(GetAllocationCallbacks());
10262     VmaVector<Block*, VmaStlAllocator<Block*>> blockList(blockCount, allocator);
10263 
10264     size_t i = blockCount;
10265     for (Block* block = m_NullBlock->prevPhysical; block != VMA_NULL; block = block->prevPhysical)
10266     {
10267         blockList[--i] = block;
10268     }
10269     VMA_ASSERT(i == 0);
10270 
10271     VmaDetailedStatistics stats;
10272     VmaClearDetailedStatistics(stats);
10273     AddDetailedStatistics(stats);
10274 
10275     PrintDetailedMap_Begin(json,
10276         stats.statistics.blockBytes - stats.statistics.allocationBytes,
10277         stats.statistics.allocationCount,
10278         stats.unusedRangeCount);
10279 
10280     for (; i < blockCount; ++i)
10281     {
10282         Block* block = blockList[i];
10283         if (block->IsFree())
10284             PrintDetailedMap_UnusedRange(json, block->offset, block->size);
10285         else
10286             PrintDetailedMap_Allocation(json, block->offset, block->size, block->UserData());
10287     }
10288     if (m_NullBlock->size > 0)
10289         PrintDetailedMap_UnusedRange(json, m_NullBlock->offset, m_NullBlock->size);
10290 
10291     PrintDetailedMap_End(json);
10292 }
10293 #endif
10294 
10295 bool VmaBlockMetadata_TLSF::CreateAllocationRequest(
10296     VkDeviceSize allocSize,
10297     VkDeviceSize allocAlignment,
10298     bool upperAddress,
10299     VmaSuballocationType allocType,
10300     uint32_t strategy,
10301     VmaAllocationRequest* pAllocationRequest)
10302 {
10303     VMA_ASSERT(allocSize > 0 && "Cannot allocate empty block!");
10304     VMA_ASSERT(!upperAddress && "VMA_ALLOCATION_CREATE_UPPER_ADDRESS_BIT can be used only with linear algorithm.");
10305 
10306     // For small granularity round up
10307     if (!IsVirtual())
10308         m_GranularityHandler.RoundupAllocRequest(allocType, allocSize, allocAlignment);
10309 
10310     allocSize += GetDebugMargin();
10311     // Quick check for too small pool
10312     if (allocSize > GetSumFreeSize())
10313         return false;
10314 
10315     // If no free blocks in pool then check only null block
10316     if (m_BlocksFreeCount == 0)
10317         return CheckBlock(*m_NullBlock, m_ListsCount, allocSize, allocAlignment, allocType, pAllocationRequest);
10318 
10319     // Round up to the next block
10320     VkDeviceSize sizeForNextList = allocSize;
10321     VkDeviceSize smallSizeStep = SMALL_BUFFER_SIZE / (IsVirtual() ? 1 << SECOND_LEVEL_INDEX : 4);
10322     if (allocSize > SMALL_BUFFER_SIZE)
10323     {
10324         sizeForNextList += (1ULL << (VMA_BITSCAN_MSB(allocSize) - SECOND_LEVEL_INDEX));
10325     }
10326     else if (allocSize > SMALL_BUFFER_SIZE - smallSizeStep)
10327         sizeForNextList = SMALL_BUFFER_SIZE + 1;
10328     else
10329         sizeForNextList += smallSizeStep;
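    // Illustrative numbers for the rounding above: a 1000-byte request (most
    // significant bit at index 9) gets sizeForNextList = 1000 + (1 << (9 - 5)) = 1016,
    // which maps to the free-list bucket one step above the request's own, so the
    // first FindFreeBlock() probe only sees blocks guaranteed to be large enough.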
10330 
10331     uint32_t nextListIndex = 0;
10332     uint32_t prevListIndex = 0;
10333     Block* nextListBlock = VMA_NULL;
10334     Block* prevListBlock = VMA_NULL;
10335 
10336     // Check blocks according to strategies
10337     if (strategy & VMA_ALLOCATION_CREATE_STRATEGY_MIN_TIME_BIT)
10338     {
10339         // Quick check for larger block first
10340         nextListBlock = FindFreeBlock(sizeForNextList, nextListIndex);
10341         if (nextListBlock != VMA_NULL && CheckBlock(*nextListBlock, nextListIndex, allocSize, allocAlignment, allocType, pAllocationRequest))
10342             return true;
10343 
10344         // If it did not fit, try the null block
10345         if (CheckBlock(*m_NullBlock, m_ListsCount, allocSize, allocAlignment, allocType, pAllocationRequest))
10346             return true;
10347 
10348         // Null block failed, search larger bucket
10349         while (nextListBlock)
10350         {
10351             if (CheckBlock(*nextListBlock, nextListIndex, allocSize, allocAlignment, allocType, pAllocationRequest))
10352                 return true;
10353             nextListBlock = nextListBlock->NextFree();
10354         }
10355 
10356         // Failed again, check best fit bucket
10357         prevListBlock = FindFreeBlock(allocSize, prevListIndex);
10358         while (prevListBlock)
10359         {
10360             if (CheckBlock(*prevListBlock, prevListIndex, allocSize, allocAlignment, allocType, pAllocationRequest))
10361                 return true;
10362             prevListBlock = prevListBlock->NextFree();
10363         }
10364     }
10365     else if (strategy & VMA_ALLOCATION_CREATE_STRATEGY_MIN_MEMORY_BIT)
10366     {
10367         // Check best fit bucket
10368         prevListBlock = FindFreeBlock(allocSize, prevListIndex);
10369         while (prevListBlock)
10370         {
10371             if (CheckBlock(*prevListBlock, prevListIndex, allocSize, allocAlignment, allocType, pAllocationRequest))
10372                 return true;
10373             prevListBlock = prevListBlock->NextFree();
10374         }
10375 
10376         // If failed check null block
10377         if (CheckBlock(*m_NullBlock, m_ListsCount, allocSize, allocAlignment, allocType, pAllocationRequest))
10378             return true;
10379 
10380         // Check larger bucket
10381         nextListBlock = FindFreeBlock(sizeForNextList, nextListIndex);
10382         while (nextListBlock)
10383         {
10384             if (CheckBlock(*nextListBlock, nextListIndex, allocSize, allocAlignment, allocType, pAllocationRequest))
10385                 return true;
10386             nextListBlock = nextListBlock->NextFree();
10387         }
10388     }
10389     else if (strategy & VMA_ALLOCATION_CREATE_STRATEGY_MIN_OFFSET_BIT)
10390     {
10391         // Perform search from the start
10392         VmaStlAllocator<Block*> allocator(GetAllocationCallbacks());
10393         VmaVector<Block*, VmaStlAllocator<Block*>> blockList(m_BlocksFreeCount, allocator);
10394 
10395         size_t i = m_BlocksFreeCount;
10396         for (Block* block = m_NullBlock->prevPhysical; block != VMA_NULL; block = block->prevPhysical)
10397         {
10398             if (block->IsFree() && block->size >= allocSize)
10399                 blockList[--i] = block;
10400         }
10401 
10402         for (; i < m_BlocksFreeCount; ++i)
10403         {
10404             Block& block = *blockList[i];
10405             if (CheckBlock(block, GetListIndex(block.size), allocSize, allocAlignment, allocType, pAllocationRequest))
10406                 return true;
10407         }
10408 
10409         // If failed check null block
10410         if (CheckBlock(*m_NullBlock, m_ListsCount, allocSize, allocAlignment, allocType, pAllocationRequest))
10411             return true;
10412 
10413         // Whole range searched, no more memory
10414         return false;
10415     }
10416     else
10417     {
10418         // Check larger bucket
10419         nextListBlock = FindFreeBlock(sizeForNextList, nextListIndex);
10420         while (nextListBlock)
10421         {
10422             if (CheckBlock(*nextListBlock, nextListIndex, allocSize, allocAlignment, allocType, pAllocationRequest))
10423                 return true;
10424             nextListBlock = nextListBlock->NextFree();
10425         }
10426 
10427         // If failed check null block
10428         if (CheckBlock(*m_NullBlock, m_ListsCount, allocSize, allocAlignment, allocType, pAllocationRequest))
10429             return true;
10430 
10431         // Check best fit bucket
10432         prevListBlock = FindFreeBlock(allocSize, prevListIndex);
10433         while (prevListBlock)
10434         {
10435             if (CheckBlock(*prevListBlock, prevListIndex, allocSize, allocAlignment, allocType, pAllocationRequest))
10436                 return true;
10437             prevListBlock = prevListBlock->NextFree();
10438         }
10439     }
10440 
10441     // Worst case, full search has to be done
10442     while (++nextListIndex < m_ListsCount)
10443     {
10444         nextListBlock = m_FreeList[nextListIndex];
10445         while (nextListBlock)
10446         {
10447             if (CheckBlock(*nextListBlock, nextListIndex, allocSize, allocAlignment, allocType, pAllocationRequest))
10448                 return true;
10449             nextListBlock = nextListBlock->NextFree();
10450         }
10451     }
10452 
10453     // No more memory sadly
10454     return false;
10455 }
10456 
10457 VkResult VmaBlockMetadata_TLSF::CheckCorruption(const void* pBlockData)
10458 {
10459     for (Block* block = m_NullBlock->prevPhysical; block != VMA_NULL; block = block->prevPhysical)
10460     {
10461         if (!block->IsFree())
10462         {
10463             if (!VmaValidateMagicValue(pBlockData, block->offset + block->size))
10464             {
10465                 VMA_ASSERT(0 && "MEMORY CORRUPTION DETECTED AFTER VALIDATED ALLOCATION!");
10466                 return VK_ERROR_UNKNOWN;
10467             }
10468         }
10469     }
10470 
10471     return VK_SUCCESS;
10472 }
10473 
10474 void VmaBlockMetadata_TLSF::Alloc(
10475     const VmaAllocationRequest& request,
10476     VmaSuballocationType type,
10477     void* userData)
10478 {
10479     VMA_ASSERT(request.type == VmaAllocationRequestType::TLSF);
10480 
10481     // Get block and pop it from the free list
10482     Block* currentBlock = (Block*)request.allocHandle;
10483     VkDeviceSize offset = request.algorithmData;
10484     VMA_ASSERT(currentBlock != VMA_NULL);
10485     VMA_ASSERT(currentBlock->offset <= offset);
10486 
10487     if (currentBlock != m_NullBlock)
10488         RemoveFreeBlock(currentBlock);
10489 
10490     VkDeviceSize debugMargin = GetDebugMargin();
10491     VkDeviceSize missingAlignment = offset - currentBlock->offset;
10492 
10493     // Append missing alignment to prev block or create new one
10494     if (missingAlignment)
10495     {
10496         Block* prevBlock = currentBlock->prevPhysical;
10497         VMA_ASSERT(prevBlock != VMA_NULL && "There should be no missing alignment at offset 0!");
10498 
10499         if (prevBlock->IsFree() && prevBlock->size != debugMargin)
10500         {
10501             uint32_t oldList = GetListIndex(prevBlock->size);
10502             prevBlock->size += missingAlignment;
10503             // Check if new size crosses list bucket
10504             if (oldList != GetListIndex(prevBlock->size))
10505             {
10506                 prevBlock->size -= missingAlignment;
10507                 RemoveFreeBlock(prevBlock);
10508                 prevBlock->size += missingAlignment;
10509                 InsertFreeBlock(prevBlock);
10510             }
10511             else
10512                 m_BlocksFreeSize += missingAlignment;
10513         }
10514         else
10515         {
10516             Block* newBlock = m_BlockAllocator.Alloc();
10517             currentBlock->prevPhysical = newBlock;
10518             prevBlock->nextPhysical = newBlock;
10519             newBlock->prevPhysical = prevBlock;
10520             newBlock->nextPhysical = currentBlock;
10521             newBlock->size = missingAlignment;
10522             newBlock->offset = currentBlock->offset;
10523             newBlock->MarkTaken();
10524 
10525             InsertFreeBlock(newBlock);
10526         }
10527 
10528         currentBlock->size -= missingAlignment;
10529         currentBlock->offset += missingAlignment;
10530     }
10531 
10532     VkDeviceSize size = request.size + debugMargin;
10533     if (currentBlock->size == size)
10534     {
10535         if (currentBlock == m_NullBlock)
10536         {
10537             // Setup new null block
10538             m_NullBlock = m_BlockAllocator.Alloc();
10539             m_NullBlock->size = 0;
10540             m_NullBlock->offset = currentBlock->offset + size;
10541             m_NullBlock->prevPhysical = currentBlock;
10542             m_NullBlock->nextPhysical = VMA_NULL;
10543             m_NullBlock->MarkFree();
10544             m_NullBlock->PrevFree() = VMA_NULL;
10545             m_NullBlock->NextFree() = VMA_NULL;
10546             currentBlock->nextPhysical = m_NullBlock;
10547             currentBlock->MarkTaken();
10548         }
10549     }
10550     else
10551     {
10552         VMA_ASSERT(currentBlock->size > size && "Proper block already found, shouldn't find smaller one!");
10553 
10554         // Create new free block
10555         Block* newBlock = m_BlockAllocator.Alloc();
10556         newBlock->size = currentBlock->size - size;
10557         newBlock->offset = currentBlock->offset + size;
10558         newBlock->prevPhysical = currentBlock;
10559         newBlock->nextPhysical = currentBlock->nextPhysical;
10560         currentBlock->nextPhysical = newBlock;
10561         currentBlock->size = size;
10562 
10563         if (currentBlock == m_NullBlock)
10564         {
10565             m_NullBlock = newBlock;
10566             m_NullBlock->MarkFree();
10567             m_NullBlock->NextFree() = VMA_NULL;
10568             m_NullBlock->PrevFree() = VMA_NULL;
10569             currentBlock->MarkTaken();
10570         }
10571         else
10572         {
10573             newBlock->nextPhysical->prevPhysical = newBlock;
10574             newBlock->MarkTaken();
10575             InsertFreeBlock(newBlock);
10576         }
10577     }
10578     currentBlock->UserData() = userData;
10579 
10580     if (debugMargin > 0)
10581     {
10582         currentBlock->size -= debugMargin;
10583         Block* newBlock = m_BlockAllocator.Alloc();
10584         newBlock->size = debugMargin;
10585         newBlock->offset = currentBlock->offset + currentBlock->size;
10586         newBlock->prevPhysical = currentBlock;
10587         newBlock->nextPhysical = currentBlock->nextPhysical;
10588         newBlock->MarkTaken();
10589         currentBlock->nextPhysical->prevPhysical = newBlock;
10590         currentBlock->nextPhysical = newBlock;
10591         InsertFreeBlock(newBlock);
10592     }
10593 
10594     if (!IsVirtual())
10595         m_GranularityHandler.AllocPages((uint8_t)(uintptr_t)request.customData,
10596             currentBlock->offset, currentBlock->size);
10597     ++m_AllocCount;
10598 }
10599 
10600 void VmaBlockMetadata_TLSF::Free(VmaAllocHandle allocHandle)
10601 {
10602     Block* block = (Block*)allocHandle;
10603     Block* next = block->nextPhysical;
10604     VMA_ASSERT(!block->IsFree() && "Block is already free!");
10605 
10606     if (!IsVirtual())
10607         m_GranularityHandler.FreePages(block->offset, block->size);
10608     --m_AllocCount;
10609 
10610     VkDeviceSize debugMargin = GetDebugMargin();
10611     if (debugMargin > 0)
10612     {
10613         RemoveFreeBlock(next);
10614         MergeBlock(next, block);
10615         block = next;
10616         next = next->nextPhysical;
10617     }
10618 
10619     // Try merging
10620     Block* prev = block->prevPhysical;
10621     if (prev != VMA_NULL && prev->IsFree() && prev->size != debugMargin)
10622     {
10623         RemoveFreeBlock(prev);
10624         MergeBlock(block, prev);
10625     }
10626 
10627     if (!next->IsFree())
10628         InsertFreeBlock(block);
10629     else if (next == m_NullBlock)
10630         MergeBlock(m_NullBlock, block);
10631     else
10632     {
10633         RemoveFreeBlock(next);
10634         MergeBlock(next, block);
10635         InsertFreeBlock(next);
10636     }
10637 }
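// Walk-through of the merging above (illustrative): freeing a block first
// coalesces backwards into a free physical predecessor, then either inserts the
// result into a free list, absorbs it into the trailing null block if they
// touch, or coalesces forward with a free successor before reinserting, so no
// two adjacent free blocks ever coexist.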
10638 
10639 void VmaBlockMetadata_TLSF::GetAllocationInfo(VmaAllocHandle allocHandle, VmaVirtualAllocationInfo& outInfo)
10640 {
10641     Block* block = (Block*)allocHandle;
10642     VMA_ASSERT(!block->IsFree() && "Cannot get allocation info for free block!");
10643     outInfo.offset = block->offset;
10644     outInfo.size = block->size;
10645     outInfo.pUserData = block->UserData();
10646 }
10647 
10648 void* VmaBlockMetadata_TLSF::GetAllocationUserData(VmaAllocHandle allocHandle) const
10649 {
10650     Block* block = (Block*)allocHandle;
10651     VMA_ASSERT(!block->IsFree() && "Cannot get user data for free block!");
10652     return block->UserData();
10653 }
10654 
10655 VmaAllocHandle VmaBlockMetadata_TLSF::GetAllocationListBegin() const
10656 {
10657     if (m_AllocCount == 0)
10658         return VK_NULL_HANDLE;
10659 
10660     for (Block* block = m_NullBlock->prevPhysical; block; block = block->prevPhysical)
10661     {
10662         if (!block->IsFree())
10663             return (VmaAllocHandle)block;
10664     }
10665     VMA_ASSERT(false && "If m_AllocCount > 0 then there must be at least one allocation!");
10666     return VK_NULL_HANDLE;
10667 }
10668 
10669 VmaAllocHandle VmaBlockMetadata_TLSF::GetNextAllocation(VmaAllocHandle prevAlloc) const
10670 {
10671     Block* startBlock = (Block*)prevAlloc;
10672     VMA_ASSERT(!startBlock->IsFree() && "Incorrect block!");
10673 
10674     for (Block* block = startBlock->prevPhysical; block; block = block->prevPhysical)
10675     {
10676         if (!block->IsFree())
10677             return (VmaAllocHandle)block;
10678     }
10679     return VK_NULL_HANDLE;
10680 }
10681 
10682 VkDeviceSize VmaBlockMetadata_TLSF::GetNextFreeRegionSize(VmaAllocHandle alloc) const
10683 {
10684     Block* block = (Block*)alloc;
10685     VMA_ASSERT(!block->IsFree() && "Incorrect block!");
10686 
10687     if (block->prevPhysical)
10688         return block->prevPhysical->IsFree() ? block->prevPhysical->size : 0;
10689     return 0;
10690 }
10691 
10692 void VmaBlockMetadata_TLSF::Clear()
10693 {
10694     m_AllocCount = 0;
10695     m_BlocksFreeCount = 0;
10696     m_BlocksFreeSize = 0;
10697     m_IsFreeBitmap = 0;
10698     m_NullBlock->offset = 0;
10699     m_NullBlock->size = GetSize();
10700     Block* block = m_NullBlock->prevPhysical;
10701     m_NullBlock->prevPhysical = VMA_NULL;
10702     while (block)
10703     {
10704         Block* prev = block->prevPhysical;
10705         m_BlockAllocator.Free(block);
10706         block = prev;
10707     }
10708     memset(m_FreeList, 0, m_ListsCount * sizeof(Block*));
10709     memset(m_InnerIsFreeBitmap, 0, m_MemoryClasses * sizeof(uint32_t));
10710     m_GranularityHandler.Clear();
10711 }
10712 
10713 void VmaBlockMetadata_TLSF::SetAllocationUserData(VmaAllocHandle allocHandle, void* userData)
10714 {
10715     Block* block = (Block*)allocHandle;
10716     VMA_ASSERT(!block->IsFree() && "Trying to set user data for not allocated block!");
10717     block->UserData() = userData;
10718 }
10719 
10720 void VmaBlockMetadata_TLSF::DebugLogAllAllocations() const
10721 {
10722     for (Block* block = m_NullBlock->prevPhysical; block != VMA_NULL; block = block->prevPhysical)
10723         if (!block->IsFree())
10724             DebugLogAllocation(block->offset, block->size, block->UserData());
10725 }
10726 
10727 uint8_t VmaBlockMetadata_TLSF::SizeToMemoryClass(VkDeviceSize size) const
10728 {
10729     if (size > SMALL_BUFFER_SIZE)
10730         return VMA_BITSCAN_MSB(size) - MEMORY_CLASS_SHIFT;
10731     return 0;
10732 }
10733 
10734 uint16_t VmaBlockMetadata_TLSF::SizeToSecondIndex(VkDeviceSize size, uint8_t memoryClass) const
10735 {
10736     if (memoryClass == 0)
10737     {
10738         if (IsVirtual())
10739             return static_cast<uint16_t>((size - 1) / 8);
10740         else
10741             return static_cast<uint16_t>((size - 1) / 64);
10742     }
10743     return static_cast<uint16_t>((size >> (memoryClass + MEMORY_CLASS_SHIFT - SECOND_LEVEL_INDEX)) ^ (1U << SECOND_LEVEL_INDEX));
10744 }
10745 
GetListIndex(uint8_t memoryClass,uint16_t secondIndex)10746 uint32_t VmaBlockMetadata_TLSF::GetListIndex(uint8_t memoryClass, uint16_t secondIndex) const
10747 {
10748     if (memoryClass == 0)
10749         return secondIndex;
10750 
10751     const uint32_t index = static_cast<uint32_t>(memoryClass - 1) * (1 << SECOND_LEVEL_INDEX) + secondIndex;
10752     if (IsVirtual())
10753         return index + (1 << SECOND_LEVEL_INDEX);
10754     else
10755         return index + 4;
10756 }
10757 
GetListIndex(VkDeviceSize size)10758 uint32_t VmaBlockMetadata_TLSF::GetListIndex(VkDeviceSize size) const
10759 {
10760     uint8_t memoryClass = SizeToMemoryClass(size);
10761     return GetListIndex(memoryClass, SizeToSecondIndex(size, memoryClass));
10762 }
10763 
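/*
Worked example (illustration only, not part of the library): assuming the
constants defined earlier in this file are MEMORY_CLASS_SHIFT = 7 and
SECOND_LEVEL_INDEX = 5, a non-virtual request of size 1000 maps as follows:

    memoryClass = VMA_BITSCAN_MSB(1000) - 7 = 9 - 7 = 2
    secondIndex = (1000 >> (2 + 7 - 5)) ^ (1 << 5) = 62 ^ 32 = 30
    listIndex   = (2 - 1) * 32 + 30 + 4 = 66

So all sizes in [992, 1008) share free list 66. This is the classic TLSF
two-level scheme: the first level is the size's power-of-two class, the
second level linearly subdivides that class into 1 << SECOND_LEVEL_INDEX
ranges, so free-list lookup stays O(1) with bounded internal fragmentation.
*/
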
void VmaBlockMetadata_TLSF::RemoveFreeBlock(Block* block)
{
    VMA_ASSERT(block != m_NullBlock);
    VMA_ASSERT(block->IsFree());

    if (block->NextFree() != VMA_NULL)
        block->NextFree()->PrevFree() = block->PrevFree();
    if (block->PrevFree() != VMA_NULL)
        block->PrevFree()->NextFree() = block->NextFree();
    else
    {
        uint8_t memClass = SizeToMemoryClass(block->size);
        uint16_t secondIndex = SizeToSecondIndex(block->size, memClass);
        uint32_t index = GetListIndex(memClass, secondIndex);
        VMA_ASSERT(m_FreeList[index] == block);
        m_FreeList[index] = block->NextFree();
        if (block->NextFree() == VMA_NULL)
        {
            m_InnerIsFreeBitmap[memClass] &= ~(1U << secondIndex);
            if (m_InnerIsFreeBitmap[memClass] == 0)
                m_IsFreeBitmap &= ~(1UL << memClass);
        }
    }
    block->MarkTaken();
    block->UserData() = VMA_NULL;
    --m_BlocksFreeCount;
    m_BlocksFreeSize -= block->size;
}

void VmaBlockMetadata_TLSF::InsertFreeBlock(Block* block)
{
    VMA_ASSERT(block != m_NullBlock);
    VMA_ASSERT(!block->IsFree() && "Cannot insert block twice!");

    uint8_t memClass = SizeToMemoryClass(block->size);
    uint16_t secondIndex = SizeToSecondIndex(block->size, memClass);
    uint32_t index = GetListIndex(memClass, secondIndex);
    VMA_ASSERT(index < m_ListsCount);
    block->PrevFree() = VMA_NULL;
    block->NextFree() = m_FreeList[index];
    m_FreeList[index] = block;
    if (block->NextFree() != VMA_NULL)
        block->NextFree()->PrevFree() = block;
    else
    {
        m_InnerIsFreeBitmap[memClass] |= 1U << secondIndex;
        m_IsFreeBitmap |= 1UL << memClass;
    }
    ++m_BlocksFreeCount;
    m_BlocksFreeSize += block->size;
}

void VmaBlockMetadata_TLSF::MergeBlock(Block* block, Block* prev)
{
    VMA_ASSERT(block->prevPhysical == prev && "Cannot merge separate physical regions!");
    VMA_ASSERT(!prev->IsFree() && "Cannot merge block that belongs to free list!");

    block->offset = prev->offset;
    block->size += prev->size;
    block->prevPhysical = prev->prevPhysical;
    if (block->prevPhysical)
        block->prevPhysical->nextPhysical = block;
    m_BlockAllocator.Free(prev);
}

VmaBlockMetadata_TLSF::Block* VmaBlockMetadata_TLSF::FindFreeBlock(VkDeviceSize size, uint32_t& listIndex) const
{
    uint8_t memoryClass = SizeToMemoryClass(size);
    uint32_t innerFreeMap = m_InnerIsFreeBitmap[memoryClass] & (~0U << SizeToSecondIndex(size, memoryClass));
    if (!innerFreeMap)
    {
        // Check higher levels for available blocks
        uint32_t freeMap = m_IsFreeBitmap & (~0UL << (memoryClass + 1));
        if (!freeMap)
            return VMA_NULL; // No more memory available

        // Find lowest free region
        memoryClass = VMA_BITSCAN_LSB(freeMap);
        innerFreeMap = m_InnerIsFreeBitmap[memoryClass];
        VMA_ASSERT(innerFreeMap != 0);
    }
    // Find lowest free subregion
    listIndex = GetListIndex(memoryClass, VMA_BITSCAN_LSB(innerFreeMap));
    VMA_ASSERT(m_FreeList[listIndex]);
    return m_FreeList[listIndex];
}

bool VmaBlockMetadata_TLSF::CheckBlock(
    Block& block,
    uint32_t listIndex,
    VkDeviceSize allocSize,
    VkDeviceSize allocAlignment,
    VmaSuballocationType allocType,
    VmaAllocationRequest* pAllocationRequest)
{
    VMA_ASSERT(block.IsFree() && "Block is already taken!");

    VkDeviceSize alignedOffset = VmaAlignUp(block.offset, allocAlignment);
    if (block.size < allocSize + alignedOffset - block.offset)
        return false;

    // Check for granularity conflicts
    if (!IsVirtual() &&
        m_GranularityHandler.CheckConflictAndAlignUp(alignedOffset, allocSize, block.offset, block.size, allocType))
        return false;

    // Alloc successful
    pAllocationRequest->type = VmaAllocationRequestType::TLSF;
    pAllocationRequest->allocHandle = (VmaAllocHandle)&block;
    pAllocationRequest->size = allocSize - GetDebugMargin();
    pAllocationRequest->customData = (void*)allocType;
    pAllocationRequest->algorithmData = alignedOffset;

    // Place block at the start of list if it's a normal block
    if (listIndex != m_ListsCount && block.PrevFree())
    {
        block.PrevFree()->NextFree() = block.NextFree();
        if (block.NextFree())
            block.NextFree()->PrevFree() = block.PrevFree();
        block.PrevFree() = VMA_NULL;
        block.NextFree() = m_FreeList[listIndex];
        m_FreeList[listIndex] = &block;
        if (block.NextFree())
            block.NextFree()->PrevFree() = &block;
    }

    return true;
}
#endif // _VMA_BLOCK_METADATA_TLSF_FUNCTIONS
#endif // _VMA_BLOCK_METADATA_TLSF

#ifndef _VMA_BLOCK_VECTOR
/*
Sequence of VmaDeviceMemoryBlock. Represents memory blocks allocated for a specific
Vulkan memory type.

Synchronized internally with a mutex.
*/
class VmaBlockVector
{
    friend struct VmaDefragmentationContext_T;
    VMA_CLASS_NO_COPY(VmaBlockVector)
public:
    VmaBlockVector(
        VmaAllocator hAllocator,
        VmaPool hParentPool,
        uint32_t memoryTypeIndex,
        VkDeviceSize preferredBlockSize,
        size_t minBlockCount,
        size_t maxBlockCount,
        VkDeviceSize bufferImageGranularity,
        bool explicitBlockSize,
        uint32_t algorithm,
        float priority,
        VkDeviceSize minAllocationAlignment,
        void* pMemoryAllocateNext);
    ~VmaBlockVector();

    VmaAllocator GetAllocator() const { return m_hAllocator; }
    VmaPool GetParentPool() const { return m_hParentPool; }
    bool IsCustomPool() const { return m_hParentPool != VMA_NULL; }
    uint32_t GetMemoryTypeIndex() const { return m_MemoryTypeIndex; }
    VkDeviceSize GetPreferredBlockSize() const { return m_PreferredBlockSize; }
    VkDeviceSize GetBufferImageGranularity() const { return m_BufferImageGranularity; }
    uint32_t GetAlgorithm() const { return m_Algorithm; }
    bool HasExplicitBlockSize() const { return m_ExplicitBlockSize; }
    float GetPriority() const { return m_Priority; }
    const void* GetAllocationNextPtr() const { return m_pMemoryAllocateNext; }
    // To be used only while the m_Mutex is locked. Used during defragmentation.
    size_t GetBlockCount() const { return m_Blocks.size(); }
    // To be used only while the m_Mutex is locked. Used during defragmentation.
    VmaDeviceMemoryBlock* GetBlock(size_t index) const { return m_Blocks[index]; }
    VMA_RW_MUTEX &GetMutex() { return m_Mutex; }

    VkResult CreateMinBlocks();
    void AddStatistics(VmaStatistics& inoutStats);
    void AddDetailedStatistics(VmaDetailedStatistics& inoutStats);
    bool IsEmpty();
    bool IsCorruptionDetectionEnabled() const;

    VkResult Allocate(
        VkDeviceSize size,
        VkDeviceSize alignment,
        const VmaAllocationCreateInfo& createInfo,
        VmaSuballocationType suballocType,
        size_t allocationCount,
        VmaAllocation* pAllocations);

    void Free(const VmaAllocation hAllocation);

#if VMA_STATS_STRING_ENABLED
    void PrintDetailedMap(class VmaJsonWriter& json);
#endif

    VkResult CheckCorruption();

private:
    const VmaAllocator m_hAllocator;
    const VmaPool m_hParentPool;
    const uint32_t m_MemoryTypeIndex;
    const VkDeviceSize m_PreferredBlockSize;
    const size_t m_MinBlockCount;
    const size_t m_MaxBlockCount;
    const VkDeviceSize m_BufferImageGranularity;
    const bool m_ExplicitBlockSize;
    const uint32_t m_Algorithm;
    const float m_Priority;
    const VkDeviceSize m_MinAllocationAlignment;

    void* const m_pMemoryAllocateNext;
    VMA_RW_MUTEX m_Mutex;
    // Incrementally sorted by sumFreeSize, ascending.
    VmaVector<VmaDeviceMemoryBlock*, VmaStlAllocator<VmaDeviceMemoryBlock*>> m_Blocks;
    uint32_t m_NextBlockId;
    bool m_IncrementalSort = true;

    void SetIncrementalSort(bool val) { m_IncrementalSort = val; }

    VkDeviceSize CalcMaxBlockSize() const;
    // Finds and removes given block from vector.
    void Remove(VmaDeviceMemoryBlock* pBlock);
    // Performs single step in sorting m_Blocks. They may not be fully sorted
    // after this call.
    void IncrementallySortBlocks();
    void SortByFreeSize();

    VkResult AllocatePage(
        VkDeviceSize size,
        VkDeviceSize alignment,
        const VmaAllocationCreateInfo& createInfo,
        VmaSuballocationType suballocType,
        VmaAllocation* pAllocation);

    VkResult AllocateFromBlock(
        VmaDeviceMemoryBlock* pBlock,
        VkDeviceSize size,
        VkDeviceSize alignment,
        VmaAllocationCreateFlags allocFlags,
        void* pUserData,
        VmaSuballocationType suballocType,
        uint32_t strategy,
        VmaAllocation* pAllocation);

    VkResult CommitAllocationRequest(
        VmaAllocationRequest& allocRequest,
        VmaDeviceMemoryBlock* pBlock,
        VkDeviceSize alignment,
        VmaAllocationCreateFlags allocFlags,
        void* pUserData,
        VmaSuballocationType suballocType,
        VmaAllocation* pAllocation);

    VkResult CreateBlock(VkDeviceSize blockSize, size_t* pNewBlockIndex);
    bool HasEmptyBlock();
};
#endif // _VMA_BLOCK_VECTOR
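
/*
Illustration only (not part of the library): VmaBlockVector is the internal
backing store for both the default per-memory-type pools and custom pools.
From the public API, a custom pool with its own block vector is created
roughly like this (assuming an existing VmaAllocator `allocator` and a
`memTypeIndex` from vmaFindMemoryTypeIndex(); the numbers are examples):

    VmaPoolCreateInfo poolInfo = {};
    poolInfo.memoryTypeIndex = memTypeIndex;
    poolInfo.blockSize = 64ull * 1024 * 1024; // optional explicit block size
    poolInfo.minBlockCount = 1;
    poolInfo.maxBlockCount = 8;

    VmaPool pool = VK_NULL_HANDLE;
    VkResult res = vmaCreatePool(allocator, &poolInfo, &pool);
    // ... allocate with VmaAllocationCreateInfo::pool = pool ...
    vmaDestroyPool(allocator, pool);
*/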

#ifndef _VMA_DEFRAGMENTATION_CONTEXT
struct VmaDefragmentationContext_T
{
    VMA_CLASS_NO_COPY(VmaDefragmentationContext_T)
public:
    VmaDefragmentationContext_T(
        VmaAllocator hAllocator,
        const VmaDefragmentationInfo& info);
    ~VmaDefragmentationContext_T();

    void GetStats(VmaDefragmentationStats& outStats) { outStats = m_GlobalStats; }

    VkResult DefragmentPassBegin(VmaDefragmentationPassMoveInfo& moveInfo);
    VkResult DefragmentPassEnd(VmaDefragmentationPassMoveInfo& moveInfo);

private:
    // Max number of allocations to ignore due to size constraints before ending single pass
    static const uint8_t MAX_ALLOCS_TO_IGNORE = 16;
    enum class CounterStatus { Pass, Ignore, End };

    struct FragmentedBlock
    {
        uint32_t data;
        VmaDeviceMemoryBlock* block;
    };
    struct StateBalanced
    {
        VkDeviceSize avgFreeSize = 0;
        VkDeviceSize avgAllocSize = UINT64_MAX;
    };
    struct StateExtensive
    {
        enum class Operation : uint8_t
        {
            FindFreeBlockBuffer, FindFreeBlockTexture, FindFreeBlockAll,
            MoveBuffers, MoveTextures, MoveAll,
            Cleanup, Done
        };

        Operation operation = Operation::FindFreeBlockTexture;
        size_t firstFreeBlock = SIZE_MAX;
    };
    struct MoveAllocationData
    {
        VkDeviceSize size;
        VkDeviceSize alignment;
        VmaSuballocationType type;
        VmaAllocationCreateFlags flags;
        VmaDefragmentationMove move = {};
    };

    const VkDeviceSize m_MaxPassBytes;
    const uint32_t m_MaxPassAllocations;

    VmaStlAllocator<VmaDefragmentationMove> m_MoveAllocator;
    VmaVector<VmaDefragmentationMove, VmaStlAllocator<VmaDefragmentationMove>> m_Moves;

    uint8_t m_IgnoredAllocs = 0;
    uint32_t m_Algorithm;
    uint32_t m_BlockVectorCount;
    VmaBlockVector* m_PoolBlockVector;
    VmaBlockVector** m_pBlockVectors;
    size_t m_ImmovableBlockCount = 0;
    VmaDefragmentationStats m_GlobalStats = { 0 };
    VmaDefragmentationStats m_PassStats = { 0 };
    void* m_AlgorithmState = VMA_NULL;

    static MoveAllocationData GetMoveData(VmaAllocHandle handle, VmaBlockMetadata* metadata);
    CounterStatus CheckCounters(VkDeviceSize bytes);
    bool IncrementCounters(VkDeviceSize bytes);
    bool ReallocWithinBlock(VmaBlockVector& vector, VmaDeviceMemoryBlock* block);
    bool AllocInOtherBlock(size_t start, size_t end, MoveAllocationData& data, VmaBlockVector& vector);

    bool ComputeDefragmentation(VmaBlockVector& vector, size_t index);
    bool ComputeDefragmentation_Fast(VmaBlockVector& vector);
    bool ComputeDefragmentation_Balanced(VmaBlockVector& vector, size_t index, bool update);
    bool ComputeDefragmentation_Full(VmaBlockVector& vector);
    bool ComputeDefragmentation_Extensive(VmaBlockVector& vector, size_t index);

    void UpdateVectorStatistics(VmaBlockVector& vector, StateBalanced& state);
    bool MoveDataToFreeBlocks(VmaSuballocationType currentType,
        VmaBlockVector& vector, size_t firstFreeBlock,
        bool& texturePresent, bool& bufferPresent, bool& otherPresent);
};
#endif // _VMA_DEFRAGMENTATION_CONTEXT
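
/*
Illustration only (not part of the library): DefragmentPassBegin/End back the
public pass-based defragmentation API. A typical caller loop, sketched under
the assumption that the application performs the data copy for each returned
move itself:

    VmaDefragmentationInfo defragInfo = {};
    defragInfo.flags = VMA_DEFRAGMENTATION_FLAG_ALGORITHM_BALANCED_BIT;

    VmaDefragmentationContext defragCtx;
    vmaBeginDefragmentation(allocator, &defragInfo, &defragCtx);

    for (;;)
    {
        VmaDefragmentationPassMoveInfo pass;
        VkResult res = vmaBeginDefragmentationPass(allocator, defragCtx, &pass);
        if (res == VK_SUCCESS)
            break; // Nothing left to move.
        // For each pass.pMoves[i]: copy srcAllocation's data to
        // dstTmpAllocation and recreate buffers/images, or set the move's
        // operation to IGNORE or DESTROY.
        res = vmaEndDefragmentationPass(allocator, defragCtx, &pass);
        if (res == VK_SUCCESS)
            break;
    }

    VmaDefragmentationStats stats;
    vmaEndDefragmentation(allocator, defragCtx, &stats);
*/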

#ifndef _VMA_POOL_T
struct VmaPool_T
{
    friend struct VmaPoolListItemTraits;
    VMA_CLASS_NO_COPY(VmaPool_T)
public:
    VmaBlockVector m_BlockVector;
    VmaDedicatedAllocationList m_DedicatedAllocations;

    VmaPool_T(
        VmaAllocator hAllocator,
        const VmaPoolCreateInfo& createInfo,
        VkDeviceSize preferredBlockSize);
    ~VmaPool_T();

    uint32_t GetId() const { return m_Id; }
    void SetId(uint32_t id) { VMA_ASSERT(m_Id == 0); m_Id = id; }

    const char* GetName() const { return m_Name; }
    void SetName(const char* pName);

#if VMA_STATS_STRING_ENABLED
    //void PrintDetailedMap(class VmaStringBuilder& sb);
#endif

private:
    uint32_t m_Id;
    char* m_Name;
    VmaPool_T* m_PrevPool = VMA_NULL;
    VmaPool_T* m_NextPool = VMA_NULL;
};

struct VmaPoolListItemTraits
{
    typedef VmaPool_T ItemType;

    static ItemType* GetPrev(const ItemType* item) { return item->m_PrevPool; }
    static ItemType* GetNext(const ItemType* item) { return item->m_NextPool; }
    static ItemType*& AccessPrev(ItemType* item) { return item->m_PrevPool; }
    static ItemType*& AccessNext(ItemType* item) { return item->m_NextPool; }
};
#endif // _VMA_POOL_T

#ifndef _VMA_CURRENT_BUDGET_DATA
struct VmaCurrentBudgetData
{
    VMA_ATOMIC_UINT32 m_BlockCount[VK_MAX_MEMORY_HEAPS];
    VMA_ATOMIC_UINT32 m_AllocationCount[VK_MAX_MEMORY_HEAPS];
    VMA_ATOMIC_UINT64 m_BlockBytes[VK_MAX_MEMORY_HEAPS];
    VMA_ATOMIC_UINT64 m_AllocationBytes[VK_MAX_MEMORY_HEAPS];

#if VMA_MEMORY_BUDGET
    VMA_ATOMIC_UINT32 m_OperationsSinceBudgetFetch;
    VMA_RW_MUTEX m_BudgetMutex;
    uint64_t m_VulkanUsage[VK_MAX_MEMORY_HEAPS];
    uint64_t m_VulkanBudget[VK_MAX_MEMORY_HEAPS];
    uint64_t m_BlockBytesAtBudgetFetch[VK_MAX_MEMORY_HEAPS];
#endif // VMA_MEMORY_BUDGET

    VmaCurrentBudgetData();

    void AddAllocation(uint32_t heapIndex, VkDeviceSize allocationSize);
    void RemoveAllocation(uint32_t heapIndex, VkDeviceSize allocationSize);
};

#ifndef _VMA_CURRENT_BUDGET_DATA_FUNCTIONS
VmaCurrentBudgetData::VmaCurrentBudgetData()
{
    for (uint32_t heapIndex = 0; heapIndex < VK_MAX_MEMORY_HEAPS; ++heapIndex)
    {
        m_BlockCount[heapIndex] = 0;
        m_AllocationCount[heapIndex] = 0;
        m_BlockBytes[heapIndex] = 0;
        m_AllocationBytes[heapIndex] = 0;
#if VMA_MEMORY_BUDGET
        m_VulkanUsage[heapIndex] = 0;
        m_VulkanBudget[heapIndex] = 0;
        m_BlockBytesAtBudgetFetch[heapIndex] = 0;
#endif
    }

#if VMA_MEMORY_BUDGET
    m_OperationsSinceBudgetFetch = 0;
#endif
}

void VmaCurrentBudgetData::AddAllocation(uint32_t heapIndex, VkDeviceSize allocationSize)
{
    m_AllocationBytes[heapIndex] += allocationSize;
    ++m_AllocationCount[heapIndex];
#if VMA_MEMORY_BUDGET
    ++m_OperationsSinceBudgetFetch;
#endif
}

void VmaCurrentBudgetData::RemoveAllocation(uint32_t heapIndex, VkDeviceSize allocationSize)
{
    VMA_ASSERT(m_AllocationBytes[heapIndex] >= allocationSize);
    m_AllocationBytes[heapIndex] -= allocationSize;
    VMA_ASSERT(m_AllocationCount[heapIndex] > 0);
    --m_AllocationCount[heapIndex];
#if VMA_MEMORY_BUDGET
    ++m_OperationsSinceBudgetFetch;
#endif
}
#endif // _VMA_CURRENT_BUDGET_DATA_FUNCTIONS
#endif // _VMA_CURRENT_BUDGET_DATA
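
/*
Illustration only (not part of the library): the per-heap counters above feed
the public budget query. A caller can stay within budget with something like
(assuming an existing `allocator` and a hypothetical `newAllocSize`):

    VmaBudget budgets[VK_MAX_MEMORY_HEAPS];
    vmaGetHeapBudgets(allocator, budgets);

    const uint32_t heapIndex = 0; // e.g. the DEVICE_LOCAL heap of interest
    if (budgets[heapIndex].usage + newAllocSize > budgets[heapIndex].budget)
    {
        // Over budget: free something first, or allocate with
        // VMA_ALLOCATION_CREATE_WITHIN_BUDGET_BIT so the call fails instead
        // of oversubscribing the heap.
    }
*/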

#ifndef _VMA_ALLOCATION_OBJECT_ALLOCATOR
/*
Thread-safe wrapper over VmaPoolAllocator free list, for allocation of VmaAllocation_T objects.
*/
class VmaAllocationObjectAllocator
{
    VMA_CLASS_NO_COPY(VmaAllocationObjectAllocator)
public:
    VmaAllocationObjectAllocator(const VkAllocationCallbacks* pAllocationCallbacks)
        : m_Allocator(pAllocationCallbacks, 1024) {}

    template<typename... Types> VmaAllocation Allocate(Types&&... args);
    void Free(VmaAllocation hAlloc);

private:
    VMA_MUTEX m_Mutex;
    VmaPoolAllocator<VmaAllocation_T> m_Allocator;
};

template<typename... Types>
VmaAllocation VmaAllocationObjectAllocator::Allocate(Types&&... args)
{
    VmaMutexLock mutexLock(m_Mutex);
    return m_Allocator.Alloc<Types...>(std::forward<Types>(args)...);
}

void VmaAllocationObjectAllocator::Free(VmaAllocation hAlloc)
{
    VmaMutexLock mutexLock(m_Mutex);
    m_Allocator.Free(hAlloc);
}
#endif // _VMA_ALLOCATION_OBJECT_ALLOCATOR

#ifndef _VMA_VIRTUAL_BLOCK_T
struct VmaVirtualBlock_T
{
    VMA_CLASS_NO_COPY(VmaVirtualBlock_T)
public:
    const bool m_AllocationCallbacksSpecified;
    const VkAllocationCallbacks m_AllocationCallbacks;

    VmaVirtualBlock_T(const VmaVirtualBlockCreateInfo& createInfo);
    ~VmaVirtualBlock_T();

    VkResult Init() { return VK_SUCCESS; }
    bool IsEmpty() const { return m_Metadata->IsEmpty(); }
    void Free(VmaVirtualAllocation allocation) { m_Metadata->Free((VmaAllocHandle)allocation); }
    void SetAllocationUserData(VmaVirtualAllocation allocation, void* userData) { m_Metadata->SetAllocationUserData((VmaAllocHandle)allocation, userData); }
    void Clear() { m_Metadata->Clear(); }

    const VkAllocationCallbacks* GetAllocationCallbacks() const;
    void GetAllocationInfo(VmaVirtualAllocation allocation, VmaVirtualAllocationInfo& outInfo);
    VkResult Allocate(const VmaVirtualAllocationCreateInfo& createInfo, VmaVirtualAllocation& outAllocation,
        VkDeviceSize* outOffset);
    void GetStatistics(VmaStatistics& outStats) const;
    void CalculateDetailedStatistics(VmaDetailedStatistics& outStats) const;
#if VMA_STATS_STRING_ENABLED
    void BuildStatsString(bool detailedMap, VmaStringBuilder& sb) const;
#endif

private:
    VmaBlockMetadata* m_Metadata;
};

#ifndef _VMA_VIRTUAL_BLOCK_T_FUNCTIONS
VmaVirtualBlock_T::VmaVirtualBlock_T(const VmaVirtualBlockCreateInfo& createInfo)
    : m_AllocationCallbacksSpecified(createInfo.pAllocationCallbacks != VMA_NULL),
    m_AllocationCallbacks(createInfo.pAllocationCallbacks != VMA_NULL ? *createInfo.pAllocationCallbacks : VmaEmptyAllocationCallbacks)
{
    const uint32_t algorithm = createInfo.flags & VMA_VIRTUAL_BLOCK_CREATE_ALGORITHM_MASK;
    switch (algorithm)
    {
    default:
        VMA_ASSERT(0);
    case 0:
        m_Metadata = vma_new(GetAllocationCallbacks(), VmaBlockMetadata_TLSF)(VK_NULL_HANDLE, 1, true);
        break;
    case VMA_VIRTUAL_BLOCK_CREATE_LINEAR_ALGORITHM_BIT:
        m_Metadata = vma_new(GetAllocationCallbacks(), VmaBlockMetadata_Linear)(VK_NULL_HANDLE, 1, true);
        break;
    }

    m_Metadata->Init(createInfo.size);
}

VmaVirtualBlock_T::~VmaVirtualBlock_T()
{
    // Define macro VMA_DEBUG_LOG to receive the list of the unfreed allocations
    if (!m_Metadata->IsEmpty())
        m_Metadata->DebugLogAllAllocations();
    // This is the most important assert in the entire library.
    // Hitting it means you have some memory leak - unreleased virtual allocations.
    VMA_ASSERT(m_Metadata->IsEmpty() && "Some virtual allocations were not freed before destruction of this virtual block!");

    vma_delete(GetAllocationCallbacks(), m_Metadata);
}

const VkAllocationCallbacks* VmaVirtualBlock_T::GetAllocationCallbacks() const
{
    return m_AllocationCallbacksSpecified ? &m_AllocationCallbacks : VMA_NULL;
}

void VmaVirtualBlock_T::GetAllocationInfo(VmaVirtualAllocation allocation, VmaVirtualAllocationInfo& outInfo)
{
    m_Metadata->GetAllocationInfo((VmaAllocHandle)allocation, outInfo);
}

VkResult VmaVirtualBlock_T::Allocate(const VmaVirtualAllocationCreateInfo& createInfo, VmaVirtualAllocation& outAllocation,
    VkDeviceSize* outOffset)
{
    VmaAllocationRequest request = {};
    if (m_Metadata->CreateAllocationRequest(
        createInfo.size, // allocSize
        VMA_MAX(createInfo.alignment, (VkDeviceSize)1), // allocAlignment
        (createInfo.flags & VMA_VIRTUAL_ALLOCATION_CREATE_UPPER_ADDRESS_BIT) != 0, // upperAddress
        VMA_SUBALLOCATION_TYPE_UNKNOWN, // allocType - unimportant
        createInfo.flags & VMA_VIRTUAL_ALLOCATION_CREATE_STRATEGY_MASK, // strategy
        &request))
    {
        m_Metadata->Alloc(request,
            VMA_SUBALLOCATION_TYPE_UNKNOWN, // type - unimportant
            createInfo.pUserData);
        outAllocation = (VmaVirtualAllocation)request.allocHandle;
        if(outOffset)
            *outOffset = m_Metadata->GetAllocationOffset(request.allocHandle);
        return VK_SUCCESS;
    }
    outAllocation = (VmaVirtualAllocation)VK_NULL_HANDLE;
    if (outOffset)
        *outOffset = UINT64_MAX;
    return VK_ERROR_OUT_OF_DEVICE_MEMORY;
}

void VmaVirtualBlock_T::GetStatistics(VmaStatistics& outStats) const
{
    VmaClearStatistics(outStats);
    m_Metadata->AddStatistics(outStats);
}

void VmaVirtualBlock_T::CalculateDetailedStatistics(VmaDetailedStatistics& outStats) const
{
    VmaClearDetailedStatistics(outStats);
    m_Metadata->AddDetailedStatistics(outStats);
}

#if VMA_STATS_STRING_ENABLED
void VmaVirtualBlock_T::BuildStatsString(bool detailedMap, VmaStringBuilder& sb) const
{
    VmaJsonWriter json(GetAllocationCallbacks(), sb);
    json.BeginObject();

    VmaDetailedStatistics stats;
    CalculateDetailedStatistics(stats);

    json.WriteString("Stats");
    VmaPrintDetailedStatistics(json, stats);

    if (detailedMap)
    {
        json.WriteString("Details");
        json.BeginObject();
        m_Metadata->PrintDetailedMap(json);
        json.EndObject();
    }

    json.EndObject();
}
#endif // VMA_STATS_STRING_ENABLED
#endif // _VMA_VIRTUAL_BLOCK_T_FUNCTIONS
#endif // _VMA_VIRTUAL_BLOCK_T
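
/*
Illustration only (not part of the library): VmaVirtualBlock_T reuses the
block metadata (TLSF or linear) to sub-allocate an arbitrary address space
with no Vulkan memory behind it. Typical public-API usage:

    VmaVirtualBlockCreateInfo blockInfo = {};
    blockInfo.size = 1024 * 1024; // size of the space to manage, in "bytes"

    VmaVirtualBlock block;
    vmaCreateVirtualBlock(&blockInfo, &block);

    VmaVirtualAllocationCreateInfo allocInfo = {};
    allocInfo.size = 4096;
    allocInfo.alignment = 256;

    VmaVirtualAllocation alloc;
    VkDeviceSize offset;
    if (vmaVirtualAllocate(block, &allocInfo, &alloc, &offset) == VK_SUCCESS)
    {
        // Use [offset, offset + 4096) of your own resource, then:
        vmaVirtualFree(block, alloc);
    }
    vmaDestroyVirtualBlock(block);
*/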


// Main allocator object.
struct VmaAllocator_T
{
    VMA_CLASS_NO_COPY(VmaAllocator_T)
public:
    bool m_UseMutex;
    uint32_t m_VulkanApiVersion;
    bool m_UseKhrDedicatedAllocation; // Can be set only if m_VulkanApiVersion < VK_MAKE_VERSION(1, 1, 0).
    bool m_UseKhrBindMemory2; // Can be set only if m_VulkanApiVersion < VK_MAKE_VERSION(1, 1, 0).
    bool m_UseExtMemoryBudget;
    bool m_UseAmdDeviceCoherentMemory;
    bool m_UseKhrBufferDeviceAddress;
    bool m_UseExtMemoryPriority;
    VkDevice m_hDevice;
    VkInstance m_hInstance;
    bool m_AllocationCallbacksSpecified;
    VkAllocationCallbacks m_AllocationCallbacks;
    VmaDeviceMemoryCallbacks m_DeviceMemoryCallbacks;
    VmaAllocationObjectAllocator m_AllocationObjectAllocator;

    // Each bit (1 << i) is set if HeapSizeLimit is enabled for that heap, so cannot allocate more than the heap size.
    uint32_t m_HeapSizeLimitMask;

    VkPhysicalDeviceProperties m_PhysicalDeviceProperties;
    VkPhysicalDeviceMemoryProperties m_MemProps;

    // Default pools.
    VmaBlockVector* m_pBlockVectors[VK_MAX_MEMORY_TYPES];
    VmaDedicatedAllocationList m_DedicatedAllocations[VK_MAX_MEMORY_TYPES];

    VmaCurrentBudgetData m_Budget;
    VMA_ATOMIC_UINT32 m_DeviceMemoryCount; // Total number of VkDeviceMemory objects.

    VmaAllocator_T(const VmaAllocatorCreateInfo* pCreateInfo);
    VkResult Init(const VmaAllocatorCreateInfo* pCreateInfo);
    ~VmaAllocator_T();

    const VkAllocationCallbacks* GetAllocationCallbacks() const
    {
        return m_AllocationCallbacksSpecified ? &m_AllocationCallbacks : VMA_NULL;
    }
    const VmaVulkanFunctions& GetVulkanFunctions() const
    {
        return m_VulkanFunctions;
    }

    VkPhysicalDevice GetPhysicalDevice() const { return m_PhysicalDevice; }

    VkDeviceSize GetBufferImageGranularity() const
    {
        return VMA_MAX(
            static_cast<VkDeviceSize>(VMA_DEBUG_MIN_BUFFER_IMAGE_GRANULARITY),
            m_PhysicalDeviceProperties.limits.bufferImageGranularity);
    }

    uint32_t GetMemoryHeapCount() const { return m_MemProps.memoryHeapCount; }
    uint32_t GetMemoryTypeCount() const { return m_MemProps.memoryTypeCount; }

    uint32_t MemoryTypeIndexToHeapIndex(uint32_t memTypeIndex) const
    {
        VMA_ASSERT(memTypeIndex < m_MemProps.memoryTypeCount);
        return m_MemProps.memoryTypes[memTypeIndex].heapIndex;
    }
    // True when specific memory type is HOST_VISIBLE but not HOST_COHERENT.
    bool IsMemoryTypeNonCoherent(uint32_t memTypeIndex) const
    {
        return (m_MemProps.memoryTypes[memTypeIndex].propertyFlags & (VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT | VK_MEMORY_PROPERTY_HOST_COHERENT_BIT)) ==
            VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT;
    }
    // Minimum alignment for all allocations in specific memory type.
    VkDeviceSize GetMemoryTypeMinAlignment(uint32_t memTypeIndex) const
    {
        return IsMemoryTypeNonCoherent(memTypeIndex) ?
            VMA_MAX((VkDeviceSize)VMA_MIN_ALIGNMENT, m_PhysicalDeviceProperties.limits.nonCoherentAtomSize) :
            (VkDeviceSize)VMA_MIN_ALIGNMENT;
    }

    bool IsIntegratedGpu() const
    {
        return m_PhysicalDeviceProperties.deviceType == VK_PHYSICAL_DEVICE_TYPE_INTEGRATED_GPU;
    }

    uint32_t GetGlobalMemoryTypeBits() const { return m_GlobalMemoryTypeBits; }

    void GetBufferMemoryRequirements(
        VkBuffer hBuffer,
        VkMemoryRequirements& memReq,
        bool& requiresDedicatedAllocation,
        bool& prefersDedicatedAllocation) const;
    void GetImageMemoryRequirements(
        VkImage hImage,
        VkMemoryRequirements& memReq,
        bool& requiresDedicatedAllocation,
        bool& prefersDedicatedAllocation) const;
    VkResult FindMemoryTypeIndex(
        uint32_t memoryTypeBits,
        const VmaAllocationCreateInfo* pAllocationCreateInfo,
        VkFlags bufImgUsage, // VkBufferCreateInfo::usage or VkImageCreateInfo::usage. UINT32_MAX if unknown.
        uint32_t* pMemoryTypeIndex) const;

    // Main allocation function.
    VkResult AllocateMemory(
        const VkMemoryRequirements& vkMemReq,
        bool requiresDedicatedAllocation,
        bool prefersDedicatedAllocation,
        VkBuffer dedicatedBuffer,
        VkImage dedicatedImage,
        VkFlags dedicatedBufferImageUsage, // UINT32_MAX if unknown.
        const VmaAllocationCreateInfo& createInfo,
        VmaSuballocationType suballocType,
        size_t allocationCount,
        VmaAllocation* pAllocations);

    // Main deallocation function.
    void FreeMemory(
        size_t allocationCount,
        const VmaAllocation* pAllocations);

    void CalculateStatistics(VmaTotalStatistics* pStats);

    void GetHeapBudgets(
        VmaBudget* outBudgets, uint32_t firstHeap, uint32_t heapCount);

#if VMA_STATS_STRING_ENABLED
    void PrintDetailedMap(class VmaJsonWriter& json);
#endif

    void GetAllocationInfo(VmaAllocation hAllocation, VmaAllocationInfo* pAllocationInfo);

    VkResult CreatePool(const VmaPoolCreateInfo* pCreateInfo, VmaPool* pPool);
    void DestroyPool(VmaPool pool);
    void GetPoolStatistics(VmaPool pool, VmaStatistics* pPoolStats);
    void CalculatePoolStatistics(VmaPool pool, VmaDetailedStatistics* pPoolStats);

    void SetCurrentFrameIndex(uint32_t frameIndex);
    uint32_t GetCurrentFrameIndex() const { return m_CurrentFrameIndex.load(); }

    VkResult CheckPoolCorruption(VmaPool hPool);
    VkResult CheckCorruption(uint32_t memoryTypeBits);

    // Call to Vulkan function vkAllocateMemory with accompanying bookkeeping.
    VkResult AllocateVulkanMemory(const VkMemoryAllocateInfo* pAllocateInfo, VkDeviceMemory* pMemory);
    // Call to Vulkan function vkFreeMemory with accompanying bookkeeping.
    void FreeVulkanMemory(uint32_t memoryType, VkDeviceSize size, VkDeviceMemory hMemory);
    // Call to Vulkan function vkBindBufferMemory or vkBindBufferMemory2KHR.
    VkResult BindVulkanBuffer(
        VkDeviceMemory memory,
        VkDeviceSize memoryOffset,
        VkBuffer buffer,
        const void* pNext);
    // Call to Vulkan function vkBindImageMemory or vkBindImageMemory2KHR.
    VkResult BindVulkanImage(
        VkDeviceMemory memory,
        VkDeviceSize memoryOffset,
        VkImage image,
        const void* pNext);

    VkResult Map(VmaAllocation hAllocation, void** ppData);
    void Unmap(VmaAllocation hAllocation);

    VkResult BindBufferMemory(
        VmaAllocation hAllocation,
        VkDeviceSize allocationLocalOffset,
        VkBuffer hBuffer,
        const void* pNext);
    VkResult BindImageMemory(
        VmaAllocation hAllocation,
        VkDeviceSize allocationLocalOffset,
        VkImage hImage,
        const void* pNext);

    VkResult FlushOrInvalidateAllocation(
        VmaAllocation hAllocation,
        VkDeviceSize offset, VkDeviceSize size,
        VMA_CACHE_OPERATION op);
    VkResult FlushOrInvalidateAllocations(
        uint32_t allocationCount,
        const VmaAllocation* allocations,
        const VkDeviceSize* offsets, const VkDeviceSize* sizes,
        VMA_CACHE_OPERATION op);

    void FillAllocation(const VmaAllocation hAllocation, uint8_t pattern);

    /*
    Returns bit mask of memory types that can support defragmentation on GPU as
    they support creation of required buffer for copy operations.
    */
    uint32_t GetGpuDefragmentationMemoryTypeBits();

#if VMA_EXTERNAL_MEMORY
    VkExternalMemoryHandleTypeFlagsKHR GetExternalMemoryHandleTypeFlags(uint32_t memTypeIndex) const
    {
        return m_TypeExternalMemoryHandleTypes[memTypeIndex];
    }
#endif // #if VMA_EXTERNAL_MEMORY

private:
    VkDeviceSize m_PreferredLargeHeapBlockSize;

    VkPhysicalDevice m_PhysicalDevice;
    VMA_ATOMIC_UINT32 m_CurrentFrameIndex;
    VMA_ATOMIC_UINT32 m_GpuDefragmentationMemoryTypeBits; // UINT32_MAX means uninitialized.
#if VMA_EXTERNAL_MEMORY
    VkExternalMemoryHandleTypeFlagsKHR m_TypeExternalMemoryHandleTypes[VK_MAX_MEMORY_TYPES];
#endif // #if VMA_EXTERNAL_MEMORY

    VMA_RW_MUTEX m_PoolsMutex;
    typedef VmaIntrusiveLinkedList<VmaPoolListItemTraits> PoolList;
    // Protected by m_PoolsMutex.
    PoolList m_Pools;
    uint32_t m_NextPoolId;

    VmaVulkanFunctions m_VulkanFunctions;

    // Global bit mask AND-ed with any memoryTypeBits to disallow certain memory types.
    uint32_t m_GlobalMemoryTypeBits;

    void ImportVulkanFunctions(const VmaVulkanFunctions* pVulkanFunctions);

#if VMA_STATIC_VULKAN_FUNCTIONS == 1
    void ImportVulkanFunctions_Static();
#endif

    void ImportVulkanFunctions_Custom(const VmaVulkanFunctions* pVulkanFunctions);

#if VMA_DYNAMIC_VULKAN_FUNCTIONS == 1
    void ImportVulkanFunctions_Dynamic();
#endif

    void ValidateVulkanFunctions();

    VkDeviceSize CalcPreferredBlockSize(uint32_t memTypeIndex);

    VkResult AllocateMemoryOfType(
        VmaPool pool,
        VkDeviceSize size,
        VkDeviceSize alignment,
        bool dedicatedPreferred,
        VkBuffer dedicatedBuffer,
        VkImage dedicatedImage,
        VkFlags dedicatedBufferImageUsage,
        const VmaAllocationCreateInfo& createInfo,
        uint32_t memTypeIndex,
        VmaSuballocationType suballocType,
        VmaDedicatedAllocationList& dedicatedAllocations,
        VmaBlockVector& blockVector,
        size_t allocationCount,
        VmaAllocation* pAllocations);

    // Helper function only to be used inside AllocateDedicatedMemory.
    VkResult AllocateDedicatedMemoryPage(
        VmaPool pool,
        VkDeviceSize size,
        VmaSuballocationType suballocType,
        uint32_t memTypeIndex,
        const VkMemoryAllocateInfo& allocInfo,
        bool map,
        bool isUserDataString,
        bool isMappingAllowed,
        void* pUserData,
        VmaAllocation* pAllocation);

    // Allocates and registers new VkDeviceMemory specifically for dedicated allocations.
    VkResult AllocateDedicatedMemory(
        VmaPool pool,
        VkDeviceSize size,
        VmaSuballocationType suballocType,
        VmaDedicatedAllocationList& dedicatedAllocations,
        uint32_t memTypeIndex,
        bool map,
        bool isUserDataString,
        bool isMappingAllowed,
        bool canAliasMemory,
        void* pUserData,
        float priority,
        VkBuffer dedicatedBuffer,
        VkImage dedicatedImage,
        VkFlags dedicatedBufferImageUsage,
        size_t allocationCount,
        VmaAllocation* pAllocations,
        const void* pNextChain = nullptr);

    void FreeDedicatedMemory(const VmaAllocation allocation);

    VkResult CalcMemTypeParams(
        VmaAllocationCreateInfo& outCreateInfo,
        uint32_t memTypeIndex,
        VkDeviceSize size,
        size_t allocationCount);
    VkResult CalcAllocationParams(
        VmaAllocationCreateInfo& outCreateInfo,
        bool dedicatedRequired,
        bool dedicatedPreferred);

    /*
    Calculates and returns bit mask of memory types that can support defragmentation
    on GPU as they support creation of required buffer for copy operations.
    */
    uint32_t CalculateGpuDefragmentationMemoryTypeBits() const;
    uint32_t CalculateGlobalMemoryTypeBits() const;

    bool GetFlushOrInvalidateRange(
        VmaAllocation allocation,
        VkDeviceSize offset, VkDeviceSize size,
        VkMappedMemoryRange& outRange) const;

#if VMA_MEMORY_BUDGET
    void UpdateVulkanBudget();
#endif // #if VMA_MEMORY_BUDGET
};

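/*
Illustration only (not part of the library): VmaAllocator_T backs the opaque
VmaAllocator handle. From the public API it is created once per VkDevice,
assuming existing `instance`, `physicalDevice`, and `device` handles:

    VmaVulkanFunctions vulkanFunctions = {};
    vulkanFunctions.vkGetInstanceProcAddr = vkGetInstanceProcAddr;
    vulkanFunctions.vkGetDeviceProcAddr = vkGetDeviceProcAddr;

    VmaAllocatorCreateInfo allocatorInfo = {};
    allocatorInfo.vulkanApiVersion = VK_API_VERSION_1_2;
    allocatorInfo.physicalDevice = physicalDevice;
    allocatorInfo.device = device;
    allocatorInfo.instance = instance;
    allocatorInfo.pVulkanFunctions = &vulkanFunctions; // needed when VMA_DYNAMIC_VULKAN_FUNCTIONS == 1

    VmaAllocator allocator;
    vmaCreateAllocator(&allocatorInfo, &allocator);
    // ... use it ...
    vmaDestroyAllocator(allocator);
*/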
11698 
#ifndef _VMA_MEMORY_FUNCTIONS
static void* VmaMalloc(VmaAllocator hAllocator, size_t size, size_t alignment)
{
    return VmaMalloc(&hAllocator->m_AllocationCallbacks, size, alignment);
}

static void VmaFree(VmaAllocator hAllocator, void* ptr)
{
    VmaFree(&hAllocator->m_AllocationCallbacks, ptr);
}

template<typename T>
static T* VmaAllocate(VmaAllocator hAllocator)
{
    return (T*)VmaMalloc(hAllocator, sizeof(T), VMA_ALIGN_OF(T));
}

template<typename T>
static T* VmaAllocateArray(VmaAllocator hAllocator, size_t count)
{
    return (T*)VmaMalloc(hAllocator, sizeof(T) * count, VMA_ALIGN_OF(T));
}

template<typename T>
static void vma_delete(VmaAllocator hAllocator, T* ptr)
{
    if(ptr != VMA_NULL)
    {
        ptr->~T();
        VmaFree(hAllocator, ptr);
    }
}

template<typename T>
static void vma_delete_array(VmaAllocator hAllocator, T* ptr, size_t count)
{
    if(ptr != VMA_NULL)
    {
        for(size_t i = count; i--; )
            ptr[i].~T();
        VmaFree(hAllocator, ptr);
    }
}
#endif // _VMA_MEMORY_FUNCTIONS

#ifndef _VMA_DEVICE_MEMORY_BLOCK_FUNCTIONS
VmaDeviceMemoryBlock::VmaDeviceMemoryBlock(VmaAllocator hAllocator)
    : m_pMetadata(VMA_NULL),
    m_MemoryTypeIndex(UINT32_MAX),
    m_Id(0),
    m_hMemory(VK_NULL_HANDLE),
    m_MapCount(0),
    m_pMappedData(VMA_NULL) {}

VmaDeviceMemoryBlock::~VmaDeviceMemoryBlock()
{
    VMA_ASSERT(m_MapCount == 0 && "VkDeviceMemory block is being destroyed while it is still mapped.");
    VMA_ASSERT(m_hMemory == VK_NULL_HANDLE);
}

void VmaDeviceMemoryBlock::Init(
    VmaAllocator hAllocator,
    VmaPool hParentPool,
    uint32_t newMemoryTypeIndex,
    VkDeviceMemory newMemory,
    VkDeviceSize newSize,
    uint32_t id,
    uint32_t algorithm,
    VkDeviceSize bufferImageGranularity)
{
    VMA_ASSERT(m_hMemory == VK_NULL_HANDLE);

    m_hParentPool = hParentPool;
    m_MemoryTypeIndex = newMemoryTypeIndex;
    m_Id = id;
    m_hMemory = newMemory;

    switch (algorithm)
    {
    case VMA_POOL_CREATE_LINEAR_ALGORITHM_BIT:
        m_pMetadata = vma_new(hAllocator, VmaBlockMetadata_Linear)(hAllocator->GetAllocationCallbacks(),
            bufferImageGranularity, false); // isVirtual
        break;
    default:
        VMA_ASSERT(0);
        // Fall-through.
    case 0:
        m_pMetadata = vma_new(hAllocator, VmaBlockMetadata_TLSF)(hAllocator->GetAllocationCallbacks(),
            bufferImageGranularity, false); // isVirtual
    }
    m_pMetadata->Init(newSize);
}

void VmaDeviceMemoryBlock::Destroy(VmaAllocator allocator)
{
    // Define macro VMA_DEBUG_LOG to receive the list of the unfreed allocations
    if (!m_pMetadata->IsEmpty())
        m_pMetadata->DebugLogAllAllocations();
    // This is the most important assert in the entire library.
    // Hitting it means you have some memory leak - unreleased VmaAllocation objects.
    VMA_ASSERT(m_pMetadata->IsEmpty() && "Some allocations were not freed before destruction of this memory block!");

    VMA_ASSERT(m_hMemory != VK_NULL_HANDLE);
    allocator->FreeVulkanMemory(m_MemoryTypeIndex, m_pMetadata->GetSize(), m_hMemory);
    m_hMemory = VK_NULL_HANDLE;

    vma_delete(allocator, m_pMetadata);
    m_pMetadata = VMA_NULL;
}

void VmaDeviceMemoryBlock::PostFree(VmaAllocator hAllocator)
{
    if(m_MappingHysteresis.PostFree())
    {
        VMA_ASSERT(m_MappingHysteresis.GetExtraMapping() == 0);
        if (m_MapCount == 0)
        {
            m_pMappedData = VMA_NULL;
            (*hAllocator->GetVulkanFunctions().vkUnmapMemory)(hAllocator->m_hDevice, m_hMemory);
        }
    }
}

bool VmaDeviceMemoryBlock::Validate() const
{
    VMA_VALIDATE((m_hMemory != VK_NULL_HANDLE) &&
        (m_pMetadata->GetSize() != 0));

    return m_pMetadata->Validate();
}

VkResult VmaDeviceMemoryBlock::CheckCorruption(VmaAllocator hAllocator)
{
    void* pData = nullptr;
    VkResult res = Map(hAllocator, 1, &pData);
    if (res != VK_SUCCESS)
    {
        return res;
    }

    res = m_pMetadata->CheckCorruption(pData);

    Unmap(hAllocator, 1);

    return res;
}

VkResult VmaDeviceMemoryBlock::Map(VmaAllocator hAllocator, uint32_t count, void** ppData)
{
    if (count == 0)
    {
        return VK_SUCCESS;
    }

    VmaMutexLock lock(m_MapAndBindMutex, hAllocator->m_UseMutex);
    const uint32_t oldTotalMapCount = m_MapCount + m_MappingHysteresis.GetExtraMapping();
    m_MappingHysteresis.PostMap();
    if (oldTotalMapCount != 0)
    {
        m_MapCount += count;
        VMA_ASSERT(m_pMappedData != VMA_NULL);
        if (ppData != VMA_NULL)
        {
            *ppData = m_pMappedData;
        }
        return VK_SUCCESS;
    }
    else
    {
        VkResult result = (*hAllocator->GetVulkanFunctions().vkMapMemory)(
            hAllocator->m_hDevice,
            m_hMemory,
            0, // offset
            VK_WHOLE_SIZE,
            0, // flags
            &m_pMappedData);
        if (result == VK_SUCCESS)
        {
            if (ppData != VMA_NULL)
            {
                *ppData = m_pMappedData;
            }
            m_MapCount = count;
        }
        return result;
    }
}

void VmaDeviceMemoryBlock::Unmap(VmaAllocator hAllocator, uint32_t count)
{
    if (count == 0)
    {
        return;
    }

    VmaMutexLock lock(m_MapAndBindMutex, hAllocator->m_UseMutex);
    if (m_MapCount >= count)
    {
        m_MapCount -= count;
        const uint32_t totalMapCount = m_MapCount + m_MappingHysteresis.GetExtraMapping();
        if (totalMapCount == 0)
        {
            m_pMappedData = VMA_NULL;
            (*hAllocator->GetVulkanFunctions().vkUnmapMemory)(hAllocator->m_hDevice, m_hMemory);
        }
        m_MappingHysteresis.PostUnmap();
    }
    else
    {
        VMA_ASSERT(0 && "VkDeviceMemory block is being unmapped while it was not previously mapped.");
    }
}

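/*
Illustration only (not part of the library): the reference counting above lets
many VmaAllocations share a single vkMapMemory of the whole block, and the
hysteresis keeps the block mapped across short unmap/map bursts. From the
public API this is simply:

    void* data = nullptr;
    if (vmaMapMemory(allocator, allocation, &data) == VK_SUCCESS)
    {
        memcpy(data, srcBytes, srcSize); // requires a HOST_VISIBLE memory type
        vmaUnmapMemory(allocator, allocation);
    }
    // For non-HOST_COHERENT memory types, also call vmaFlushAllocation().
*/
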
VkResult VmaDeviceMemoryBlock::WriteMagicValueAfterAllocation(VmaAllocator hAllocator, VkDeviceSize allocOffset, VkDeviceSize allocSize)
{
    VMA_ASSERT(VMA_DEBUG_MARGIN > 0 && VMA_DEBUG_MARGIN % 4 == 0 && VMA_DEBUG_DETECT_CORRUPTION);

    void* pData;
    VkResult res = Map(hAllocator, 1, &pData);
    if (res != VK_SUCCESS)
    {
        return res;
    }

    VmaWriteMagicValue(pData, allocOffset + allocSize);

    Unmap(hAllocator, 1);
    return VK_SUCCESS;
}

VkResult VmaDeviceMemoryBlock::ValidateMagicValueAfterAllocation(VmaAllocator hAllocator, VkDeviceSize allocOffset, VkDeviceSize allocSize)
{
    VMA_ASSERT(VMA_DEBUG_MARGIN > 0 && VMA_DEBUG_MARGIN % 4 == 0 && VMA_DEBUG_DETECT_CORRUPTION);

    void* pData;
    VkResult res = Map(hAllocator, 1, &pData);
    if (res != VK_SUCCESS)
    {
        return res;
    }

    if (!VmaValidateMagicValue(pData, allocOffset + allocSize))
    {
        VMA_ASSERT(0 && "MEMORY CORRUPTION DETECTED AFTER FREED ALLOCATION!");
    }

    Unmap(hAllocator, 1);
    return VK_SUCCESS;
}

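/*
Illustration only (not part of the library): these magic-value helpers are
active only when the implementation is compiled with corruption detection
enabled, e.g.:

    #define VMA_DEBUG_MARGIN 16
    #define VMA_DEBUG_DETECT_CORRUPTION 1
    #define VMA_IMPLEMENTATION
    #include "vk_mem_alloc.h"

    // Later, validate the margins of HOST_VISIBLE allocations on demand:
    VkResult res = vmaCheckCorruption(allocator, UINT32_MAX); // all memory types

The margin after each allocation is filled with a known magic value and
re-checked on free; a mismatch triggers the assert above.
*/
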
BindBufferMemory(const VmaAllocator hAllocator,const VmaAllocation hAllocation,VkDeviceSize allocationLocalOffset,VkBuffer hBuffer,const void * pNext)11949 VkResult VmaDeviceMemoryBlock::BindBufferMemory(
11950     const VmaAllocator hAllocator,
11951     const VmaAllocation hAllocation,
11952     VkDeviceSize allocationLocalOffset,
11953     VkBuffer hBuffer,
11954     const void* pNext)
11955 {
11956     VMA_ASSERT(hAllocation->GetType() == VmaAllocation_T::ALLOCATION_TYPE_BLOCK &&
11957         hAllocation->GetBlock() == this);
11958     VMA_ASSERT(allocationLocalOffset < hAllocation->GetSize() &&
11959         "Invalid allocationLocalOffset. Did you forget that this offset is relative to the beginning of the allocation, not the whole memory block?");
11960     const VkDeviceSize memoryOffset = hAllocation->GetOffset() + allocationLocalOffset;
11961     // This lock is important so that we don't call vkBind... and/or vkMap... simultaneously on the same VkDeviceMemory from multiple threads.
11962     VmaMutexLock lock(m_MapAndBindMutex, hAllocator->m_UseMutex);
11963     return hAllocator->BindVulkanBuffer(m_hMemory, memoryOffset, hBuffer, pNext);
11964 }
11965 
BindImageMemory(const VmaAllocator hAllocator,const VmaAllocation hAllocation,VkDeviceSize allocationLocalOffset,VkImage hImage,const void * pNext)11966 VkResult VmaDeviceMemoryBlock::BindImageMemory(
11967     const VmaAllocator hAllocator,
11968     const VmaAllocation hAllocation,
11969     VkDeviceSize allocationLocalOffset,
11970     VkImage hImage,
11971     const void* pNext)
11972 {
11973     VMA_ASSERT(hAllocation->GetType() == VmaAllocation_T::ALLOCATION_TYPE_BLOCK &&
11974         hAllocation->GetBlock() == this);
11975     VMA_ASSERT(allocationLocalOffset < hAllocation->GetSize() &&
11976         "Invalid allocationLocalOffset. Did you forget that this offset is relative to the beginning of the allocation, not the whole memory block?");
11977     const VkDeviceSize memoryOffset = hAllocation->GetOffset() + allocationLocalOffset;
11978     // This lock is important so that we don't call vkBind... and/or vkMap... simultaneously on the same VkDeviceMemory from multiple threads.
11979     VmaMutexLock lock(m_MapAndBindMutex, hAllocator->m_UseMutex);
    return hAllocator->BindVulkanImage(m_hMemory, memoryOffset, hImage, pNext);
}
#endif // _VMA_DEVICE_MEMORY_BLOCK_FUNCTIONS

#ifndef _VMA_ALLOCATION_T_FUNCTIONS
VmaAllocation_T::VmaAllocation_T(bool mappingAllowed)
    : m_Alignment{ 1 },
    m_Size{ 0 },
    m_pUserData{ VMA_NULL },
    m_pName{ VMA_NULL },
    m_MemoryTypeIndex{ 0 },
    m_Type{ (uint8_t)ALLOCATION_TYPE_NONE },
    m_SuballocationType{ (uint8_t)VMA_SUBALLOCATION_TYPE_UNKNOWN },
    m_MapCount{ 0 },
    m_Flags{ 0 }
{
    if(mappingAllowed)
        m_Flags |= (uint8_t)FLAG_MAPPING_ALLOWED;

#if VMA_STATS_STRING_ENABLED
    m_BufferImageUsage = 0;
#endif
}

VmaAllocation_T::~VmaAllocation_T()
{
    VMA_ASSERT(m_MapCount == 0 && "Allocation was not unmapped before destruction.");

    // Check if owned string was freed.
    VMA_ASSERT(m_pName == VMA_NULL);
}

void VmaAllocation_T::InitBlockAllocation(
    VmaDeviceMemoryBlock* block,
    VmaAllocHandle allocHandle,
    VkDeviceSize alignment,
    VkDeviceSize size,
    uint32_t memoryTypeIndex,
    VmaSuballocationType suballocationType,
    bool mapped)
{
    VMA_ASSERT(m_Type == ALLOCATION_TYPE_NONE);
    VMA_ASSERT(block != VMA_NULL);
    m_Type = (uint8_t)ALLOCATION_TYPE_BLOCK;
    m_Alignment = alignment;
    m_Size = size;
    m_MemoryTypeIndex = memoryTypeIndex;
    if(mapped)
    {
        VMA_ASSERT(IsMappingAllowed() && "Mapping is not allowed on this allocation! Please use one of the new VMA_ALLOCATION_CREATE_HOST_ACCESS_* flags when creating it.");
        m_Flags |= (uint8_t)FLAG_PERSISTENT_MAP;
    }
    m_SuballocationType = (uint8_t)suballocationType;
    m_BlockAllocation.m_Block = block;
    m_BlockAllocation.m_AllocHandle = allocHandle;
}

void VmaAllocation_T::InitDedicatedAllocation(
    VmaPool hParentPool,
    uint32_t memoryTypeIndex,
    VkDeviceMemory hMemory,
    VmaSuballocationType suballocationType,
    void* pMappedData,
    VkDeviceSize size)
{
    VMA_ASSERT(m_Type == ALLOCATION_TYPE_NONE);
    VMA_ASSERT(hMemory != VK_NULL_HANDLE);
    m_Type = (uint8_t)ALLOCATION_TYPE_DEDICATED;
    m_Alignment = 0;
    m_Size = size;
    m_MemoryTypeIndex = memoryTypeIndex;
    m_SuballocationType = (uint8_t)suballocationType;
    if(pMappedData != VMA_NULL)
    {
        VMA_ASSERT(IsMappingAllowed() && "Mapping is not allowed on this allocation! Please use one of the new VMA_ALLOCATION_CREATE_HOST_ACCESS_* flags when creating it.");
        m_Flags |= (uint8_t)FLAG_PERSISTENT_MAP;
    }
    m_DedicatedAllocation.m_hParentPool = hParentPool;
    m_DedicatedAllocation.m_hMemory = hMemory;
    m_DedicatedAllocation.m_pMappedData = pMappedData;
    m_DedicatedAllocation.m_Prev = VMA_NULL;
    m_DedicatedAllocation.m_Next = VMA_NULL;
}

void VmaAllocation_T::SetName(VmaAllocator hAllocator, const char* pName)
{
    VMA_ASSERT(pName == VMA_NULL || pName != m_pName);

    FreeName(hAllocator);

    if (pName != VMA_NULL)
        m_pName = VmaCreateStringCopy(hAllocator->GetAllocationCallbacks(), pName);
}

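// Exchanges the block binding of this allocation with `allocation` (the temporary
// destination created by defragmentation), updating the per-block metadata on both
// sides. Returns the previous map count of this allocation so the caller can
// re-establish the mapping on the destination block afterwards.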
uint8_t VmaAllocation_T::SwapBlockAllocation(VmaAllocator hAllocator, VmaAllocation allocation)
{
    VMA_ASSERT(allocation != VMA_NULL);
    VMA_ASSERT(m_Type == ALLOCATION_TYPE_BLOCK);
    VMA_ASSERT(allocation->m_Type == ALLOCATION_TYPE_BLOCK);

    if (m_MapCount != 0)
        m_BlockAllocation.m_Block->Unmap(hAllocator, m_MapCount);

    m_BlockAllocation.m_Block->m_pMetadata->SetAllocationUserData(m_BlockAllocation.m_AllocHandle, allocation);
    VMA_SWAP(m_BlockAllocation, allocation->m_BlockAllocation);
    m_BlockAllocation.m_Block->m_pMetadata->SetAllocationUserData(m_BlockAllocation.m_AllocHandle, this);

#if VMA_STATS_STRING_ENABLED
    VMA_SWAP(m_BufferImageUsage, allocation->m_BufferImageUsage);
#endif
    return m_MapCount;
}

VmaAllocHandle VmaAllocation_T::GetAllocHandle() const
{
    switch (m_Type)
    {
    case ALLOCATION_TYPE_BLOCK:
        return m_BlockAllocation.m_AllocHandle;
    case ALLOCATION_TYPE_DEDICATED:
        return VK_NULL_HANDLE;
    default:
        VMA_ASSERT(0);
        return VK_NULL_HANDLE;
    }
}

VkDeviceSize VmaAllocation_T::GetOffset() const
{
    switch (m_Type)
    {
    case ALLOCATION_TYPE_BLOCK:
        return m_BlockAllocation.m_Block->m_pMetadata->GetAllocationOffset(m_BlockAllocation.m_AllocHandle);
    case ALLOCATION_TYPE_DEDICATED:
        return 0;
    default:
        VMA_ASSERT(0);
        return 0;
    }
}

VmaPool VmaAllocation_T::GetParentPool() const
{
    switch (m_Type)
    {
    case ALLOCATION_TYPE_BLOCK:
        return m_BlockAllocation.m_Block->GetParentPool();
    case ALLOCATION_TYPE_DEDICATED:
        return m_DedicatedAllocation.m_hParentPool;
    default:
        VMA_ASSERT(0);
        return VK_NULL_HANDLE;
    }
}

VkDeviceMemory VmaAllocation_T::GetMemory() const
{
    switch (m_Type)
    {
    case ALLOCATION_TYPE_BLOCK:
        return m_BlockAllocation.m_Block->GetDeviceMemory();
    case ALLOCATION_TYPE_DEDICATED:
        return m_DedicatedAllocation.m_hMemory;
    default:
        VMA_ASSERT(0);
        return VK_NULL_HANDLE;
    }
}

void* VmaAllocation_T::GetMappedData() const
{
    switch (m_Type)
    {
    case ALLOCATION_TYPE_BLOCK:
        if (m_MapCount != 0 || IsPersistentMap())
        {
            void* pBlockData = m_BlockAllocation.m_Block->GetMappedData();
            VMA_ASSERT(pBlockData != VMA_NULL);
            return (char*)pBlockData + GetOffset();
        }
        else
        {
            return VMA_NULL;
        }
        break;
    case ALLOCATION_TYPE_DEDICATED:
        VMA_ASSERT((m_DedicatedAllocation.m_pMappedData != VMA_NULL) == (m_MapCount != 0 || IsPersistentMap()));
        return m_DedicatedAllocation.m_pMappedData;
    default:
        VMA_ASSERT(0);
        return VMA_NULL;
    }
}

void VmaAllocation_T::BlockAllocMap()
{
    VMA_ASSERT(GetType() == ALLOCATION_TYPE_BLOCK);
    VMA_ASSERT(IsMappingAllowed() && "Mapping is not allowed on this allocation! Please use one of the new VMA_ALLOCATION_CREATE_HOST_ACCESS_* flags when creating it.");

    if (m_MapCount < 0xFF)
    {
        ++m_MapCount;
    }
    else
    {
        VMA_ASSERT(0 && "Allocation mapped too many times simultaneously.");
    }
}

void VmaAllocation_T::BlockAllocUnmap()
{
    VMA_ASSERT(GetType() == ALLOCATION_TYPE_BLOCK);

    if (m_MapCount > 0)
    {
        --m_MapCount;
    }
    else
    {
        VMA_ASSERT(0 && "Unmapping allocation not previously mapped.");
    }
}

VkResult VmaAllocation_T::DedicatedAllocMap(VmaAllocator hAllocator, void** ppData)
{
    VMA_ASSERT(GetType() == ALLOCATION_TYPE_DEDICATED);
    VMA_ASSERT(IsMappingAllowed() && "Mapping is not allowed on this allocation! Please use one of the new VMA_ALLOCATION_CREATE_HOST_ACCESS_* flags when creating it.");

    if (m_MapCount != 0 || IsPersistentMap())
    {
        if (m_MapCount < 0xFF)
        {
            VMA_ASSERT(m_DedicatedAllocation.m_pMappedData != VMA_NULL);
            *ppData = m_DedicatedAllocation.m_pMappedData;
            ++m_MapCount;
            return VK_SUCCESS;
        }
        else
        {
            VMA_ASSERT(0 && "Dedicated allocation mapped too many times simultaneously.");
            return VK_ERROR_MEMORY_MAP_FAILED;
        }
    }
    else
    {
        VkResult result = (*hAllocator->GetVulkanFunctions().vkMapMemory)(
            hAllocator->m_hDevice,
            m_DedicatedAllocation.m_hMemory,
            0, // offset
            VK_WHOLE_SIZE,
            0, // flags
            ppData);
        if (result == VK_SUCCESS)
        {
            m_DedicatedAllocation.m_pMappedData = *ppData;
            m_MapCount = 1;
        }
        return result;
    }
}

void VmaAllocation_T::DedicatedAllocUnmap(VmaAllocator hAllocator)
{
    VMA_ASSERT(GetType() == ALLOCATION_TYPE_DEDICATED);

    if (m_MapCount > 0)
    {
        --m_MapCount;
        if (m_MapCount == 0 && !IsPersistentMap())
        {
            m_DedicatedAllocation.m_pMappedData = VMA_NULL;
            (*hAllocator->GetVulkanFunctions().vkUnmapMemory)(
                hAllocator->m_hDevice,
                m_DedicatedAllocation.m_hMemory);
        }
    }
    else
    {
        VMA_ASSERT(0 && "Unmapping dedicated allocation not previously mapped.");
    }
}

#if VMA_STATS_STRING_ENABLED
void VmaAllocation_T::InitBufferImageUsage(uint32_t bufferImageUsage)
{
    VMA_ASSERT(m_BufferImageUsage == 0);
    m_BufferImageUsage = bufferImageUsage;
}

void VmaAllocation_T::PrintParameters(class VmaJsonWriter& json) const
{
    json.WriteString("Type");
    json.WriteString(VMA_SUBALLOCATION_TYPE_NAMES[m_SuballocationType]);

    json.WriteString("Size");
    json.WriteNumber(m_Size);
    json.WriteString("Usage");
    json.WriteNumber(m_BufferImageUsage);

    if (m_pUserData != VMA_NULL)
    {
        json.WriteString("CustomData");
        json.BeginString();
        json.ContinueString_Pointer(m_pUserData);
        json.EndString();
    }
    if (m_pName != VMA_NULL)
    {
        json.WriteString("Name");
        json.WriteString(m_pName);
    }
}
#endif // VMA_STATS_STRING_ENABLED

void VmaAllocation_T::FreeName(VmaAllocator hAllocator)
{
    if(m_pName)
    {
        VmaFreeString(hAllocator->GetAllocationCallbacks(), m_pName);
        m_pName = VMA_NULL;
    }
}
#endif // _VMA_ALLOCATION_T_FUNCTIONS

#ifndef _VMA_BLOCK_VECTOR_FUNCTIONS
VmaBlockVector::VmaBlockVector(
    VmaAllocator hAllocator,
    VmaPool hParentPool,
    uint32_t memoryTypeIndex,
    VkDeviceSize preferredBlockSize,
    size_t minBlockCount,
    size_t maxBlockCount,
    VkDeviceSize bufferImageGranularity,
    bool explicitBlockSize,
    uint32_t algorithm,
    float priority,
    VkDeviceSize minAllocationAlignment,
    void* pMemoryAllocateNext)
    : m_hAllocator(hAllocator),
    m_hParentPool(hParentPool),
    m_MemoryTypeIndex(memoryTypeIndex),
    m_PreferredBlockSize(preferredBlockSize),
    m_MinBlockCount(minBlockCount),
    m_MaxBlockCount(maxBlockCount),
    m_BufferImageGranularity(bufferImageGranularity),
    m_ExplicitBlockSize(explicitBlockSize),
    m_Algorithm(algorithm),
    m_Priority(priority),
    m_MinAllocationAlignment(minAllocationAlignment),
    m_pMemoryAllocateNext(pMemoryAllocateNext),
    m_Blocks(VmaStlAllocator<VmaDeviceMemoryBlock*>(hAllocator->GetAllocationCallbacks())),
    m_NextBlockId(0) {}

VmaBlockVector::~VmaBlockVector()
{
    for (size_t i = m_Blocks.size(); i--; )
    {
        m_Blocks[i]->Destroy(m_hAllocator);
        vma_delete(m_hAllocator, m_Blocks[i]);
    }
}

VkResult VmaBlockVector::CreateMinBlocks()
{
    for (size_t i = 0; i < m_MinBlockCount; ++i)
    {
        VkResult res = CreateBlock(m_PreferredBlockSize, VMA_NULL);
        if (res != VK_SUCCESS)
        {
            return res;
        }
    }
    return VK_SUCCESS;
}

void VmaBlockVector::AddStatistics(VmaStatistics& inoutStats)
{
    VmaMutexLockRead lock(m_Mutex, m_hAllocator->m_UseMutex);

    const size_t blockCount = m_Blocks.size();
    for (uint32_t blockIndex = 0; blockIndex < blockCount; ++blockIndex)
    {
        const VmaDeviceMemoryBlock* const pBlock = m_Blocks[blockIndex];
        VMA_ASSERT(pBlock);
        VMA_HEAVY_ASSERT(pBlock->Validate());
        pBlock->m_pMetadata->AddStatistics(inoutStats);
    }
}

void VmaBlockVector::AddDetailedStatistics(VmaDetailedStatistics& inoutStats)
{
    VmaMutexLockRead lock(m_Mutex, m_hAllocator->m_UseMutex);

    const size_t blockCount = m_Blocks.size();
    for (uint32_t blockIndex = 0; blockIndex < blockCount; ++blockIndex)
    {
        const VmaDeviceMemoryBlock* const pBlock = m_Blocks[blockIndex];
        VMA_ASSERT(pBlock);
        VMA_HEAVY_ASSERT(pBlock->Validate());
        pBlock->m_pMetadata->AddDetailedStatistics(inoutStats);
    }
}

bool VmaBlockVector::IsEmpty()
{
    VmaMutexLockRead lock(m_Mutex, m_hAllocator->m_UseMutex);
    return m_Blocks.empty();
}

bool VmaBlockVector::IsCorruptionDetectionEnabled() const
{
    const uint32_t requiredMemFlags = VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT | VK_MEMORY_PROPERTY_HOST_COHERENT_BIT;
    return (VMA_DEBUG_DETECT_CORRUPTION != 0) &&
        (VMA_DEBUG_MARGIN > 0) &&
        (m_Algorithm == 0 || m_Algorithm == VMA_POOL_CREATE_LINEAR_ALGORITHM_BIT) &&
        (m_hAllocator->m_MemProps.memoryTypes[m_MemoryTypeIndex].propertyFlags & requiredMemFlags) == requiredMemFlags;
}

VkResult VmaBlockVector::Allocate(
    VkDeviceSize size,
    VkDeviceSize alignment,
    const VmaAllocationCreateInfo& createInfo,
    VmaSuballocationType suballocType,
    size_t allocationCount,
    VmaAllocation* pAllocations)
{
    size_t allocIndex;
    VkResult res = VK_SUCCESS;

    alignment = VMA_MAX(alignment, m_MinAllocationAlignment);

    if (IsCorruptionDetectionEnabled())
    {
        size = VmaAlignUp<VkDeviceSize>(size, sizeof(VMA_CORRUPTION_DETECTION_MAGIC_VALUE));
        alignment = VmaAlignUp<VkDeviceSize>(alignment, sizeof(VMA_CORRUPTION_DETECTION_MAGIC_VALUE));
    }

    {
        VmaMutexLockWrite lock(m_Mutex, m_hAllocator->m_UseMutex);
        for (allocIndex = 0; allocIndex < allocationCount; ++allocIndex)
        {
            res = AllocatePage(
                size,
                alignment,
                createInfo,
                suballocType,
                pAllocations + allocIndex);
            if (res != VK_SUCCESS)
            {
                break;
            }
        }
    }

    if (res != VK_SUCCESS)
    {
        // Free all already created allocations.
        while (allocIndex--)
            Free(pAllocations[allocIndex]);
        memset(pAllocations, 0, sizeof(VmaAllocation) * allocationCount);
    }

    return res;
}

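// Allocates a single allocation from this block vector: first tries existing
// blocks (traversal order depends on the pool algorithm and allocation strategy),
// then, if allowed, creates a new block, halving the preferred size down to 1/8
// when budget or previous failures suggest a smaller block. On failure it returns
// VK_ERROR_OUT_OF_DEVICE_MEMORY so the caller may fall back to a dedicated allocation.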
VkResult VmaBlockVector::AllocatePage(
    VkDeviceSize size,
    VkDeviceSize alignment,
    const VmaAllocationCreateInfo& createInfo,
    VmaSuballocationType suballocType,
    VmaAllocation* pAllocation)
{
    const bool isUpperAddress = (createInfo.flags & VMA_ALLOCATION_CREATE_UPPER_ADDRESS_BIT) != 0;

    VkDeviceSize freeMemory;
    {
        const uint32_t heapIndex = m_hAllocator->MemoryTypeIndexToHeapIndex(m_MemoryTypeIndex);
        VmaBudget heapBudget = {};
        m_hAllocator->GetHeapBudgets(&heapBudget, heapIndex, 1);
        freeMemory = (heapBudget.usage < heapBudget.budget) ? (heapBudget.budget - heapBudget.usage) : 0;
    }

    const bool canFallbackToDedicated = !HasExplicitBlockSize() &&
        (createInfo.flags & VMA_ALLOCATION_CREATE_NEVER_ALLOCATE_BIT) == 0;
    const bool canCreateNewBlock =
        ((createInfo.flags & VMA_ALLOCATION_CREATE_NEVER_ALLOCATE_BIT) == 0) &&
        (m_Blocks.size() < m_MaxBlockCount) &&
        (freeMemory >= size || !canFallbackToDedicated);
    uint32_t strategy = createInfo.flags & VMA_ALLOCATION_CREATE_STRATEGY_MASK;

    // Upper address can only be used with linear allocator and within single memory block.
    if (isUpperAddress &&
        (m_Algorithm != VMA_POOL_CREATE_LINEAR_ALGORITHM_BIT || m_MaxBlockCount > 1))
    {
        return VK_ERROR_FEATURE_NOT_PRESENT;
    }

    // Early reject: requested allocation size is larger than the maximum block size for this block vector.
    if (size + VMA_DEBUG_MARGIN > m_PreferredBlockSize)
    {
        return VK_ERROR_OUT_OF_DEVICE_MEMORY;
    }

    // 1. Search existing allocations. Try to allocate.
    if (m_Algorithm == VMA_POOL_CREATE_LINEAR_ALGORITHM_BIT)
    {
        // Use only last block.
        if (!m_Blocks.empty())
        {
            VmaDeviceMemoryBlock* const pCurrBlock = m_Blocks.back();
            VMA_ASSERT(pCurrBlock);
            VkResult res = AllocateFromBlock(
                pCurrBlock, size, alignment, createInfo.flags, createInfo.pUserData, suballocType, strategy, pAllocation);
            if (res == VK_SUCCESS)
            {
                VMA_DEBUG_LOG("    Returned from last block #%u", pCurrBlock->GetId());
                IncrementallySortBlocks();
                return VK_SUCCESS;
            }
        }
    }
    else
    {
        if (strategy != VMA_ALLOCATION_CREATE_STRATEGY_MIN_TIME_BIT) // MIN_MEMORY or default
        {
            const bool isHostVisible =
                (m_hAllocator->m_MemProps.memoryTypes[m_MemoryTypeIndex].propertyFlags & VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT) != 0;
            if(isHostVisible)
            {
                const bool isMappingAllowed = (createInfo.flags &
                    (VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT | VMA_ALLOCATION_CREATE_HOST_ACCESS_RANDOM_BIT)) != 0;
                /*
                For non-mappable allocations, check blocks that are not mapped first.
                For mappable allocations, check blocks that are already mapped first.
                This way, having many blocks, we will separate mappable and non-mappable allocations,
                hopefully limiting the number of blocks that are mapped, which will help tools like RenderDoc.
                */
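                // E.g. for a mappable allocation (isMappingAllowed == true), pass 0
                // (mappingI == 0) visits blocks that are already mapped and pass 1
                // visits the rest; for a non-mappable allocation the order is reversed.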
                for(size_t mappingI = 0; mappingI < 2; ++mappingI)
                {
                    // Forward order in m_Blocks - prefer blocks with smallest amount of free space.
                    for (size_t blockIndex = 0; blockIndex < m_Blocks.size(); ++blockIndex)
                    {
                        VmaDeviceMemoryBlock* const pCurrBlock = m_Blocks[blockIndex];
                        VMA_ASSERT(pCurrBlock);
                        const bool isBlockMapped = pCurrBlock->GetMappedData() != VMA_NULL;
                        if((mappingI == 0) == (isMappingAllowed == isBlockMapped))
                        {
                            VkResult res = AllocateFromBlock(
                                pCurrBlock, size, alignment, createInfo.flags, createInfo.pUserData, suballocType, strategy, pAllocation);
                            if (res == VK_SUCCESS)
                            {
                                VMA_DEBUG_LOG("    Returned from existing block #%u", pCurrBlock->GetId());
                                IncrementallySortBlocks();
                                return VK_SUCCESS;
                            }
                        }
                    }
                }
            }
            else
            {
                // Forward order in m_Blocks - prefer blocks with smallest amount of free space.
                for (size_t blockIndex = 0; blockIndex < m_Blocks.size(); ++blockIndex)
                {
                    VmaDeviceMemoryBlock* const pCurrBlock = m_Blocks[blockIndex];
                    VMA_ASSERT(pCurrBlock);
                    VkResult res = AllocateFromBlock(
                        pCurrBlock, size, alignment, createInfo.flags, createInfo.pUserData, suballocType, strategy, pAllocation);
                    if (res == VK_SUCCESS)
                    {
                        VMA_DEBUG_LOG("    Returned from existing block #%u", pCurrBlock->GetId());
                        IncrementallySortBlocks();
                        return VK_SUCCESS;
                    }
                }
            }
        }
        else // VMA_ALLOCATION_CREATE_STRATEGY_MIN_TIME_BIT
        {
            // Backward order in m_Blocks - prefer blocks with largest amount of free space.
            for (size_t blockIndex = m_Blocks.size(); blockIndex--; )
            {
                VmaDeviceMemoryBlock* const pCurrBlock = m_Blocks[blockIndex];
                VMA_ASSERT(pCurrBlock);
                VkResult res = AllocateFromBlock(pCurrBlock, size, alignment, createInfo.flags, createInfo.pUserData, suballocType, strategy, pAllocation);
                if (res == VK_SUCCESS)
                {
                    VMA_DEBUG_LOG("    Returned from existing block #%u", pCurrBlock->GetId());
                    IncrementallySortBlocks();
                    return VK_SUCCESS;
                }
            }
        }
    }

    // 2. Try to create new block.
    if (canCreateNewBlock)
    {
        // Calculate optimal size for new block.
        VkDeviceSize newBlockSize = m_PreferredBlockSize;
        uint32_t newBlockSizeShift = 0;
        const uint32_t NEW_BLOCK_SIZE_SHIFT_MAX = 3;

        if (!m_ExplicitBlockSize)
        {
            // Allocate 1/8, 1/4, 1/2 as first blocks.
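            // E.g. with m_PreferredBlockSize = 256 MiB and no existing blocks,
            // the first block is created with 32 MiB, the next with 64 MiB,
            // then 128 MiB, and subsequent ones with the full 256 MiB.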
            const VkDeviceSize maxExistingBlockSize = CalcMaxBlockSize();
            for (uint32_t i = 0; i < NEW_BLOCK_SIZE_SHIFT_MAX; ++i)
            {
                const VkDeviceSize smallerNewBlockSize = newBlockSize / 2;
                if (smallerNewBlockSize > maxExistingBlockSize && smallerNewBlockSize >= size * 2)
                {
                    newBlockSize = smallerNewBlockSize;
                    ++newBlockSizeShift;
                }
                else
                {
                    break;
                }
            }
        }

        size_t newBlockIndex = 0;
        VkResult res = (newBlockSize <= freeMemory || !canFallbackToDedicated) ?
            CreateBlock(newBlockSize, &newBlockIndex) : VK_ERROR_OUT_OF_DEVICE_MEMORY;
        // Allocation of this size failed? Try 1/2, 1/4, 1/8 of m_PreferredBlockSize.
        if (!m_ExplicitBlockSize)
        {
            while (res < 0 && newBlockSizeShift < NEW_BLOCK_SIZE_SHIFT_MAX)
            {
                const VkDeviceSize smallerNewBlockSize = newBlockSize / 2;
                if (smallerNewBlockSize >= size)
                {
                    newBlockSize = smallerNewBlockSize;
                    ++newBlockSizeShift;
                    res = (newBlockSize <= freeMemory || !canFallbackToDedicated) ?
                        CreateBlock(newBlockSize, &newBlockIndex) : VK_ERROR_OUT_OF_DEVICE_MEMORY;
                }
                else
                {
                    break;
                }
            }
        }

        if (res == VK_SUCCESS)
        {
            VmaDeviceMemoryBlock* const pBlock = m_Blocks[newBlockIndex];
            VMA_ASSERT(pBlock->m_pMetadata->GetSize() >= size);

            res = AllocateFromBlock(
                pBlock, size, alignment, createInfo.flags, createInfo.pUserData, suballocType, strategy, pAllocation);
            if (res == VK_SUCCESS)
            {
                VMA_DEBUG_LOG("    Created new block #%u Size=%llu", pBlock->GetId(), newBlockSize);
                IncrementallySortBlocks();
                return VK_SUCCESS;
            }
            else
            {
                // Allocation from new block failed, possibly due to VMA_DEBUG_MARGIN or alignment.
                return VK_ERROR_OUT_OF_DEVICE_MEMORY;
            }
        }
    }

    return VK_ERROR_OUT_OF_DEVICE_MEMORY;
}

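// Frees a block allocation. At most one empty block is kept alive as hysteresis;
// a second empty block (or any empty block while over budget) is destroyed,
// with the actual destruction deferred until after the mutex is released.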
void VmaBlockVector::Free(const VmaAllocation hAllocation)
{
    VmaDeviceMemoryBlock* pBlockToDelete = VMA_NULL;

    bool budgetExceeded = false;
    {
        const uint32_t heapIndex = m_hAllocator->MemoryTypeIndexToHeapIndex(m_MemoryTypeIndex);
        VmaBudget heapBudget = {};
        m_hAllocator->GetHeapBudgets(&heapBudget, heapIndex, 1);
        budgetExceeded = heapBudget.usage >= heapBudget.budget;
    }

    // Scope for lock.
    {
        VmaMutexLockWrite lock(m_Mutex, m_hAllocator->m_UseMutex);

        VmaDeviceMemoryBlock* pBlock = hAllocation->GetBlock();

        if (IsCorruptionDetectionEnabled())
        {
            VkResult res = pBlock->ValidateMagicValueAfterAllocation(m_hAllocator, hAllocation->GetOffset(), hAllocation->GetSize());
            VMA_ASSERT(res == VK_SUCCESS && "Couldn't map block memory to validate magic value.");
        }

        if (hAllocation->IsPersistentMap())
        {
            pBlock->Unmap(m_hAllocator, 1);
        }

        const bool hadEmptyBlockBeforeFree = HasEmptyBlock();
        pBlock->m_pMetadata->Free(hAllocation->GetAllocHandle());
        pBlock->PostFree(m_hAllocator);
        VMA_HEAVY_ASSERT(pBlock->Validate());

        VMA_DEBUG_LOG("  Freed from MemoryTypeIndex=%u", m_MemoryTypeIndex);

        const bool canDeleteBlock = m_Blocks.size() > m_MinBlockCount;
        // pBlock became empty after this deallocation.
        if (pBlock->m_pMetadata->IsEmpty())
        {
            // Already had an empty block. We don't want to have two, so delete this one.
            if ((hadEmptyBlockBeforeFree || budgetExceeded) && canDeleteBlock)
            {
                pBlockToDelete = pBlock;
                Remove(pBlock);
            }
            // else: We now have one empty block - leave it. A hysteresis to avoid allocating whole block back and forth.
        }
        // pBlock didn't become empty, but we have another empty block - find and free that one.
        // (This is optional, heuristics.)
        else if (hadEmptyBlockBeforeFree && canDeleteBlock)
        {
            VmaDeviceMemoryBlock* pLastBlock = m_Blocks.back();
            if (pLastBlock->m_pMetadata->IsEmpty())
            {
                pBlockToDelete = pLastBlock;
                m_Blocks.pop_back();
            }
        }

        IncrementallySortBlocks();
    }

    // Destruction of a free block. Deferred until this point, outside of mutex
    // lock, for performance reasons.
    if (pBlockToDelete != VMA_NULL)
    {
        VMA_DEBUG_LOG("    Deleted empty block #%u", pBlockToDelete->GetId());
        pBlockToDelete->Destroy(m_hAllocator);
        vma_delete(m_hAllocator, pBlockToDelete);
    }

    m_hAllocator->m_Budget.RemoveAllocation(m_hAllocator->MemoryTypeIndexToHeapIndex(m_MemoryTypeIndex), hAllocation->GetSize());
    m_hAllocator->m_AllocationObjectAllocator.Free(hAllocation);
}

VkDeviceSize VmaBlockVector::CalcMaxBlockSize() const
{
    VkDeviceSize result = 0;
    for (size_t i = m_Blocks.size(); i--; )
    {
        result = VMA_MAX(result, m_Blocks[i]->m_pMetadata->GetSize());
        if (result >= m_PreferredBlockSize)
        {
            break;
        }
    }
    return result;
}

void VmaBlockVector::Remove(VmaDeviceMemoryBlock* pBlock)
{
    for (uint32_t blockIndex = 0; blockIndex < m_Blocks.size(); ++blockIndex)
    {
        if (m_Blocks[blockIndex] == pBlock)
        {
            VmaVectorRemove(m_Blocks, blockIndex);
            return;
        }
    }
    VMA_ASSERT(0);
}

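// Keeps m_Blocks approximately sorted by increasing free space. Performing at
// most one swap per call amortizes the sorting cost over many allocations and frees.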
void VmaBlockVector::IncrementallySortBlocks()
{
    if (!m_IncrementalSort)
        return;
    if (m_Algorithm != VMA_POOL_CREATE_LINEAR_ALGORITHM_BIT)
    {
        // Bubble sort only until first swap.
        for (size_t i = 1; i < m_Blocks.size(); ++i)
        {
            if (m_Blocks[i - 1]->m_pMetadata->GetSumFreeSize() > m_Blocks[i]->m_pMetadata->GetSumFreeSize())
            {
                VMA_SWAP(m_Blocks[i - 1], m_Blocks[i]);
                return;
            }
        }
    }
}

void VmaBlockVector::SortByFreeSize()
{
    VMA_SORT(m_Blocks.begin(), m_Blocks.end(),
        [](auto* b1, auto* b2)
        {
            return b1->m_pMetadata->GetSumFreeSize() < b2->m_pMetadata->GetSumFreeSize();
        });
}

VkResult VmaBlockVector::AllocateFromBlock(
    VmaDeviceMemoryBlock* pBlock,
    VkDeviceSize size,
    VkDeviceSize alignment,
    VmaAllocationCreateFlags allocFlags,
    void* pUserData,
    VmaSuballocationType suballocType,
    uint32_t strategy,
    VmaAllocation* pAllocation)
{
    const bool isUpperAddress = (allocFlags & VMA_ALLOCATION_CREATE_UPPER_ADDRESS_BIT) != 0;

    VmaAllocationRequest currRequest = {};
    if (pBlock->m_pMetadata->CreateAllocationRequest(
        size,
        alignment,
        isUpperAddress,
        suballocType,
        strategy,
        &currRequest))
    {
        return CommitAllocationRequest(currRequest, pBlock, alignment, allocFlags, pUserData, suballocType, pAllocation);
    }
    return VK_ERROR_OUT_OF_DEVICE_MEMORY;
}

VkResult VmaBlockVector::CommitAllocationRequest(
    VmaAllocationRequest& allocRequest,
    VmaDeviceMemoryBlock* pBlock,
    VkDeviceSize alignment,
    VmaAllocationCreateFlags allocFlags,
    void* pUserData,
    VmaSuballocationType suballocType,
    VmaAllocation* pAllocation)
{
    const bool mapped = (allocFlags & VMA_ALLOCATION_CREATE_MAPPED_BIT) != 0;
    const bool isUserDataString = (allocFlags & VMA_ALLOCATION_CREATE_USER_DATA_COPY_STRING_BIT) != 0;
    const bool isMappingAllowed = (allocFlags &
        (VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT | VMA_ALLOCATION_CREATE_HOST_ACCESS_RANDOM_BIT)) != 0;

    pBlock->PostAlloc();
    // Allocate from pCurrBlock.
    if (mapped)
    {
        VkResult res = pBlock->Map(m_hAllocator, 1, VMA_NULL);
        if (res != VK_SUCCESS)
        {
            return res;
        }
    }

    *pAllocation = m_hAllocator->m_AllocationObjectAllocator.Allocate(isMappingAllowed);
    pBlock->m_pMetadata->Alloc(allocRequest, suballocType, *pAllocation);
    (*pAllocation)->InitBlockAllocation(
        pBlock,
        allocRequest.allocHandle,
        alignment,
        allocRequest.size, // Not size, as actual allocation size may be larger than requested!
        m_MemoryTypeIndex,
        suballocType,
        mapped);
    VMA_HEAVY_ASSERT(pBlock->Validate());
    if (isUserDataString)
        (*pAllocation)->SetName(m_hAllocator, (const char*)pUserData);
    else
        (*pAllocation)->SetUserData(m_hAllocator, pUserData);
    m_hAllocator->m_Budget.AddAllocation(m_hAllocator->MemoryTypeIndexToHeapIndex(m_MemoryTypeIndex), allocRequest.size);
    if (VMA_DEBUG_INITIALIZE_ALLOCATIONS)
    {
        m_hAllocator->FillAllocation(*pAllocation, VMA_ALLOCATION_FILL_PATTERN_CREATED);
    }
    if (IsCorruptionDetectionEnabled())
    {
        VkResult res = pBlock->WriteMagicValueAfterAllocation(m_hAllocator, (*pAllocation)->GetOffset(), allocRequest.size);
        VMA_ASSERT(res == VK_SUCCESS && "Couldn't map block memory to write magic value.");
    }
    return VK_SUCCESS;
}

VkResult VmaBlockVector::CreateBlock(VkDeviceSize blockSize, size_t* pNewBlockIndex)
{
    VkMemoryAllocateInfo allocInfo = { VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_INFO };
    allocInfo.pNext = m_pMemoryAllocateNext;
    allocInfo.memoryTypeIndex = m_MemoryTypeIndex;
    allocInfo.allocationSize = blockSize;

#if VMA_BUFFER_DEVICE_ADDRESS
    // Every standalone block can potentially contain a buffer with VK_BUFFER_USAGE_SHADER_DEVICE_ADDRESS_BIT - always enable the feature.
    VkMemoryAllocateFlagsInfoKHR allocFlagsInfo = { VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_FLAGS_INFO_KHR };
    if (m_hAllocator->m_UseKhrBufferDeviceAddress)
    {
        allocFlagsInfo.flags = VK_MEMORY_ALLOCATE_DEVICE_ADDRESS_BIT_KHR;
        VmaPnextChainPushFront(&allocInfo, &allocFlagsInfo);
    }
#endif // VMA_BUFFER_DEVICE_ADDRESS

#if VMA_MEMORY_PRIORITY
    VkMemoryPriorityAllocateInfoEXT priorityInfo = { VK_STRUCTURE_TYPE_MEMORY_PRIORITY_ALLOCATE_INFO_EXT };
    if (m_hAllocator->m_UseExtMemoryPriority)
    {
        VMA_ASSERT(m_Priority >= 0.f && m_Priority <= 1.f);
        priorityInfo.priority = m_Priority;
        VmaPnextChainPushFront(&allocInfo, &priorityInfo);
    }
#endif // VMA_MEMORY_PRIORITY

#if VMA_EXTERNAL_MEMORY
    // Attach VkExportMemoryAllocateInfoKHR if necessary.
    VkExportMemoryAllocateInfoKHR exportMemoryAllocInfo = { VK_STRUCTURE_TYPE_EXPORT_MEMORY_ALLOCATE_INFO_KHR };
    exportMemoryAllocInfo.handleTypes = m_hAllocator->GetExternalMemoryHandleTypeFlags(m_MemoryTypeIndex);
    if (exportMemoryAllocInfo.handleTypes != 0)
    {
        VmaPnextChainPushFront(&allocInfo, &exportMemoryAllocInfo);
    }
#endif // VMA_EXTERNAL_MEMORY

    VkDeviceMemory mem = VK_NULL_HANDLE;
    VkResult res = m_hAllocator->AllocateVulkanMemory(&allocInfo, &mem);
    if (res < 0)
    {
        return res;
    }

    // New VkDeviceMemory successfully created.

    // Create new Allocation for it.
    VmaDeviceMemoryBlock* const pBlock = vma_new(m_hAllocator, VmaDeviceMemoryBlock)(m_hAllocator);
    pBlock->Init(
        m_hAllocator,
        m_hParentPool,
        m_MemoryTypeIndex,
        mem,
        allocInfo.allocationSize,
        m_NextBlockId++,
        m_Algorithm,
        m_BufferImageGranularity);

    m_Blocks.push_back(pBlock);
    if (pNewBlockIndex != VMA_NULL)
    {
        *pNewBlockIndex = m_Blocks.size() - 1;
    }

    return VK_SUCCESS;
}

bool VmaBlockVector::HasEmptyBlock()
{
    for (size_t index = 0, count = m_Blocks.size(); index < count; ++index)
    {
        VmaDeviceMemoryBlock* const pBlock = m_Blocks[index];
        if (pBlock->m_pMetadata->IsEmpty())
        {
            return true;
        }
    }
    return false;
}

#if VMA_STATS_STRING_ENABLED
void VmaBlockVector::PrintDetailedMap(class VmaJsonWriter& json)
{
    VmaMutexLockRead lock(m_Mutex, m_hAllocator->m_UseMutex);

    json.BeginObject();
    for (size_t i = 0; i < m_Blocks.size(); ++i)
    {
        json.BeginString();
        json.ContinueString(m_Blocks[i]->GetId());
        json.EndString();

        json.BeginObject();
        json.WriteString("MapRefCount");
        json.WriteNumber(m_Blocks[i]->GetMapRefCount());

        m_Blocks[i]->m_pMetadata->PrintDetailedMap(json);
        json.EndObject();
    }
    json.EndObject();
}
#endif // VMA_STATS_STRING_ENABLED

VkResult VmaBlockVector::CheckCorruption()
{
    if (!IsCorruptionDetectionEnabled())
    {
        return VK_ERROR_FEATURE_NOT_PRESENT;
    }

    VmaMutexLockRead lock(m_Mutex, m_hAllocator->m_UseMutex);
    for (uint32_t blockIndex = 0; blockIndex < m_Blocks.size(); ++blockIndex)
    {
        VmaDeviceMemoryBlock* const pBlock = m_Blocks[blockIndex];
        VMA_ASSERT(pBlock);
        VkResult res = pBlock->CheckCorruption(m_hAllocator);
        if (res != VK_SUCCESS)
        {
            return res;
        }
    }
    return VK_SUCCESS;
}

#endif // _VMA_BLOCK_VECTOR_FUNCTIONS

#ifndef _VMA_DEFRAGMENTATION_CONTEXT_FUNCTIONS
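/*
A minimal sketch of how the defragmentation passes implemented below are
typically driven from user code through the public API (error handling and the
actual data copy for each returned move are omitted; the copy step depends on
the application):

    VmaDefragmentationInfo defragInfo = {};
    defragInfo.flags = VMA_DEFRAGMENTATION_FLAG_ALGORITHM_BALANCED_BIT;

    VmaDefragmentationContext defragCtx;
    vmaBeginDefragmentation(allocator, &defragInfo, &defragCtx);

    VmaDefragmentationPassMoveInfo pass;
    while (vmaBeginDefragmentationPass(allocator, defragCtx, &pass) == VK_INCOMPLETE)
    {
        // For each pass.pMoves[i]: copy the data from srcAllocation to
        // dstTmpAllocation, or set move.operation to IGNORE/DESTROY.
        if (vmaEndDefragmentationPass(allocator, defragCtx, &pass) == VK_SUCCESS)
            break;
    }

    VmaDefragmentationStats stats;
    vmaEndDefragmentation(allocator, defragCtx, &stats);
*/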
VmaDefragmentationContext_T::VmaDefragmentationContext_T(
    VmaAllocator hAllocator,
    const VmaDefragmentationInfo& info)
    : m_MaxPassBytes(info.maxBytesPerPass == 0 ? VK_WHOLE_SIZE : info.maxBytesPerPass),
    m_MaxPassAllocations(info.maxAllocationsPerPass == 0 ? UINT32_MAX : info.maxAllocationsPerPass),
    m_MoveAllocator(hAllocator->GetAllocationCallbacks()),
    m_Moves(m_MoveAllocator)
{
    m_Algorithm = info.flags & VMA_DEFRAGMENTATION_FLAG_ALGORITHM_MASK;

    if (info.pool != VMA_NULL)
    {
        m_BlockVectorCount = 1;
        m_PoolBlockVector = &info.pool->m_BlockVector;
        m_pBlockVectors = &m_PoolBlockVector;
        m_PoolBlockVector->SetIncrementalSort(false);
        m_PoolBlockVector->SortByFreeSize();
    }
    else
    {
        m_BlockVectorCount = hAllocator->GetMemoryTypeCount();
        m_PoolBlockVector = VMA_NULL;
        m_pBlockVectors = hAllocator->m_pBlockVectors;
        for (uint32_t i = 0; i < m_BlockVectorCount; ++i)
        {
            VmaBlockVector* vector = m_pBlockVectors[i];
            if (vector != VMA_NULL)
            {
                vector->SetIncrementalSort(false);
                vector->SortByFreeSize();
            }
        }
    }

    switch (m_Algorithm)
    {
    case 0: // Default algorithm
        m_Algorithm = VMA_DEFRAGMENTATION_FLAG_ALGORITHM_BALANCED_BIT;
        // Intentional fallthrough to the balanced case.
    case VMA_DEFRAGMENTATION_FLAG_ALGORITHM_BALANCED_BIT:
    {
        m_AlgorithmState = vma_new_array(hAllocator, StateBalanced, m_BlockVectorCount);
        break;
    }
    case VMA_DEFRAGMENTATION_FLAG_ALGORITHM_EXTENSIVE_BIT:
    {
        if (hAllocator->GetBufferImageGranularity() > 1)
        {
            m_AlgorithmState = vma_new_array(hAllocator, StateExtensive, m_BlockVectorCount);
        }
        break;
    }
    }
}

VmaDefragmentationContext_T::~VmaDefragmentationContext_T()
{
    if (m_PoolBlockVector != VMA_NULL)
    {
        m_PoolBlockVector->SetIncrementalSort(true);
    }
    else
    {
        for (uint32_t i = 0; i < m_BlockVectorCount; ++i)
        {
            VmaBlockVector* vector = m_pBlockVectors[i];
            if (vector != VMA_NULL)
                vector->SetIncrementalSort(true);
        }
    }

    if (m_AlgorithmState)
    {
        switch (m_Algorithm)
        {
        case VMA_DEFRAGMENTATION_FLAG_ALGORITHM_BALANCED_BIT:
            vma_delete_array(m_MoveAllocator.m_pCallbacks, reinterpret_cast<StateBalanced*>(m_AlgorithmState), m_BlockVectorCount);
            break;
        case VMA_DEFRAGMENTATION_FLAG_ALGORITHM_EXTENSIVE_BIT:
            vma_delete_array(m_MoveAllocator.m_pCallbacks, reinterpret_cast<StateExtensive*>(m_AlgorithmState), m_BlockVectorCount);
            break;
        default:
            VMA_ASSERT(0);
        }
    }
}

VkResult VmaDefragmentationContext_T::DefragmentPassBegin(VmaDefragmentationPassMoveInfo& moveInfo)
{
    if (m_PoolBlockVector != VMA_NULL)
    {
        VmaMutexLockWrite lock(m_PoolBlockVector->GetMutex(), m_PoolBlockVector->GetAllocator()->m_UseMutex);

        if (m_PoolBlockVector->GetBlockCount() > 1)
            ComputeDefragmentation(*m_PoolBlockVector, 0);
        else if (m_PoolBlockVector->GetBlockCount() == 1)
            ReallocWithinBlock(*m_PoolBlockVector, m_PoolBlockVector->GetBlock(0));
    }
    else
    {
        for (uint32_t i = 0; i < m_BlockVectorCount; ++i)
        {
            if (m_pBlockVectors[i] != VMA_NULL)
            {
                VmaMutexLockWrite lock(m_pBlockVectors[i]->GetMutex(), m_pBlockVectors[i]->GetAllocator()->m_UseMutex);

                if (m_pBlockVectors[i]->GetBlockCount() > 1)
                {
                    if (ComputeDefragmentation(*m_pBlockVectors[i], i))
                        break;
                }
                else if (m_pBlockVectors[i]->GetBlockCount() == 1)
                {
                    if (ReallocWithinBlock(*m_pBlockVectors[i], m_pBlockVectors[i]->GetBlock(0)))
                        break;
                }
            }
        }
    }

    moveInfo.moveCount = static_cast<uint32_t>(m_Moves.size());
    if (moveInfo.moveCount > 0)
    {
        moveInfo.pMoves = m_Moves.data();
        return VK_INCOMPLETE;
    }

    moveInfo.pMoves = VMA_NULL;
    return VK_SUCCESS;
}

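// Applies the results of a pass: for each returned move it either commits the
// swap to the destination (COPY), rolls it back (IGNORE), or frees both source
// and destination (DESTROY), then updates the pass and global statistics.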
VkResult VmaDefragmentationContext_T::DefragmentPassEnd(VmaDefragmentationPassMoveInfo& moveInfo)
{
    VMA_ASSERT(moveInfo.moveCount > 0 ? moveInfo.pMoves != VMA_NULL : true);

    VkResult result = VK_SUCCESS;
    VmaStlAllocator<FragmentedBlock> blockAllocator(m_MoveAllocator.m_pCallbacks);
    VmaVector<FragmentedBlock, VmaStlAllocator<FragmentedBlock>> immovableBlocks(blockAllocator);
    VmaVector<FragmentedBlock, VmaStlAllocator<FragmentedBlock>> mappedBlocks(blockAllocator);

    VmaAllocator allocator = VMA_NULL;
    for (uint32_t i = 0; i < moveInfo.moveCount; ++i)
    {
        VmaDefragmentationMove& move = moveInfo.pMoves[i];
        size_t prevCount = 0, currentCount = 0;
        VkDeviceSize freedBlockSize = 0;

        uint32_t vectorIndex;
        VmaBlockVector* vector;
        if (m_PoolBlockVector != VMA_NULL)
        {
            vectorIndex = 0;
            vector = m_PoolBlockVector;
        }
        else
        {
            vectorIndex = move.srcAllocation->GetMemoryTypeIndex();
            vector = m_pBlockVectors[vectorIndex];
            VMA_ASSERT(vector != VMA_NULL);
        }

        switch (move.operation)
        {
        case VMA_DEFRAGMENTATION_MOVE_OPERATION_COPY:
        {
            uint8_t mapCount = move.srcAllocation->SwapBlockAllocation(vector->m_hAllocator, move.dstTmpAllocation);
            if (mapCount > 0)
            {
                allocator = vector->m_hAllocator;
                VmaDeviceMemoryBlock* newMapBlock = move.srcAllocation->GetBlock();
                bool notPresent = true;
                for (FragmentedBlock& block : mappedBlocks)
                {
                    if (block.block == newMapBlock)
                    {
                        notPresent = false;
                        block.data += mapCount;
                        break;
                    }
                }
                if (notPresent)
                    mappedBlocks.push_back({ mapCount, newMapBlock });
            }

            // Scope for locks; Free has its own lock.
            {
                VmaMutexLockRead lock(vector->GetMutex(), vector->GetAllocator()->m_UseMutex);
                prevCount = vector->GetBlockCount();
                freedBlockSize = move.dstTmpAllocation->GetBlock()->m_pMetadata->GetSize();
            }
            vector->Free(move.dstTmpAllocation);
            {
                VmaMutexLockRead lock(vector->GetMutex(), vector->GetAllocator()->m_UseMutex);
                currentCount = vector->GetBlockCount();
            }

            result = VK_INCOMPLETE;
            break;
        }
        case VMA_DEFRAGMENTATION_MOVE_OPERATION_IGNORE:
        {
            m_PassStats.bytesMoved -= move.srcAllocation->GetSize();
            --m_PassStats.allocationsMoved;
            vector->Free(move.dstTmpAllocation);

            VmaDeviceMemoryBlock* newBlock = move.srcAllocation->GetBlock();
            bool notPresent = true;
            for (const FragmentedBlock& block : immovableBlocks)
            {
                if (block.block == newBlock)
                {
                    notPresent = false;
                    break;
                }
            }
            if (notPresent)
                immovableBlocks.push_back({ vectorIndex, newBlock });
            break;
        }
        case VMA_DEFRAGMENTATION_MOVE_OPERATION_DESTROY:
        {
            m_PassStats.bytesMoved -= move.srcAllocation->GetSize();
            --m_PassStats.allocationsMoved;
            // Scope for locks; Free has its own lock.
            {
                VmaMutexLockRead lock(vector->GetMutex(), vector->GetAllocator()->m_UseMutex);
                prevCount = vector->GetBlockCount();
                freedBlockSize = move.srcAllocation->GetBlock()->m_pMetadata->GetSize();
            }
            vector->Free(move.srcAllocation);
            {
                VmaMutexLockRead lock(vector->GetMutex(), vector->GetAllocator()->m_UseMutex);
                currentCount = vector->GetBlockCount();
            }
            freedBlockSize *= prevCount - currentCount;

            VkDeviceSize dstBlockSize;
            {
                VmaMutexLockRead lock(vector->GetMutex(), vector->GetAllocator()->m_UseMutex);
                dstBlockSize = move.dstTmpAllocation->GetBlock()->m_pMetadata->GetSize();
            }
            vector->Free(move.dstTmpAllocation);
            {
                VmaMutexLockRead lock(vector->GetMutex(), vector->GetAllocator()->m_UseMutex);
                freedBlockSize += dstBlockSize * (currentCount - vector->GetBlockCount());
                currentCount = vector->GetBlockCount();
            }

            result = VK_INCOMPLETE;
            break;
        }
        default:
            VMA_ASSERT(0);
        }

        if (prevCount > currentCount)
        {
            size_t freedBlocks = prevCount - currentCount;
            m_PassStats.deviceMemoryBlocksFreed += static_cast<uint32_t>(freedBlocks);
            m_PassStats.bytesFreed += freedBlockSize;
        }

        switch (m_Algorithm)
        {
        case VMA_DEFRAGMENTATION_FLAG_ALGORITHM_EXTENSIVE_BIT:
        {
            if (m_AlgorithmState != VMA_NULL)
            {
                // Avoid unnecessary tries to allocate when a new free block is available.
                StateExtensive& state = reinterpret_cast<StateExtensive*>(m_AlgorithmState)[vectorIndex];
                if (state.firstFreeBlock != SIZE_MAX)
                {
                    const size_t diff = prevCount - currentCount;
                    if (state.firstFreeBlock >= diff)
                    {
                        state.firstFreeBlock -= diff;
                        if (state.firstFreeBlock != 0)
                            state.firstFreeBlock -= vector->GetBlock(state.firstFreeBlock - 1)->m_pMetadata->IsEmpty();
                    }
                    else
                        state.firstFreeBlock = 0;
                }
            }
        }
        }
    }
    moveInfo.moveCount = 0;
    moveInfo.pMoves = VMA_NULL;
    m_Moves.clear();

    // Update stats
    m_GlobalStats.allocationsMoved += m_PassStats.allocationsMoved;
    m_GlobalStats.bytesFreed += m_PassStats.bytesFreed;
    m_GlobalStats.bytesMoved += m_PassStats.bytesMoved;
    m_GlobalStats.deviceMemoryBlocksFreed += m_PassStats.deviceMemoryBlocksFreed;
    m_PassStats = { 0 };

    // Move blocks with immovable allocations according to algorithm
    if (immovableBlocks.size() > 0)
    {
        switch (m_Algorithm)
        {
        case VMA_DEFRAGMENTATION_FLAG_ALGORITHM_EXTENSIVE_BIT:
        {
            if (m_AlgorithmState != VMA_NULL)
            {
                bool swapped = false;
                // Move to the start of free blocks range
                for (const FragmentedBlock& block : immovableBlocks)
                {
                    StateExtensive& state = reinterpret_cast<StateExtensive*>(m_AlgorithmState)[block.data];
                    if (state.operation != StateExtensive::Operation::Cleanup)
                    {
                        VmaBlockVector* vector = m_pBlockVectors[block.data];
                        VmaMutexLockWrite lock(vector->GetMutex(), vector->GetAllocator()->m_UseMutex);

                        for (size_t i = 0, count = vector->GetBlockCount() - m_ImmovableBlockCount; i < count; ++i)
                        {
                            if (vector->GetBlock(i) == block.block)
                            {
                                VMA_SWAP(vector->m_Blocks[i], vector->m_Blocks[vector->GetBlockCount() - ++m_ImmovableBlockCount]);
                                if (state.firstFreeBlock != SIZE_MAX)
                                {
                                    if (i + 1 < state.firstFreeBlock)
                                    {
                                        if (state.firstFreeBlock > 1)
                                            VMA_SWAP(vector->m_Blocks[i], vector->m_Blocks[--state.firstFreeBlock]);
                                        else
                                            --state.firstFreeBlock;
                                    }
                                }
                                swapped = true;
                                break;
                            }
                        }
                    }
                }
                if (swapped)
                    result = VK_INCOMPLETE;
                break;
            }
        }
        default:
        {
            // Move to the beginning
            for (const FragmentedBlock& block : immovableBlocks)
            {
                VmaBlockVector* vector = m_pBlockVectors[block.data];
                VmaMutexLockWrite lock(vector->GetMutex(), vector->GetAllocator()->m_UseMutex);

                for (size_t i = m_ImmovableBlockCount; i < vector->GetBlockCount(); ++i)
                {
                    if (vector->GetBlock(i) == block.block)
                    {
                        VMA_SWAP(vector->m_Blocks[i], vector->m_Blocks[m_ImmovableBlockCount++]);
                        break;
                    }
                }
            }
            break;
        }
        }
    }

    // Bulk-map destination blocks
    for (const FragmentedBlock& block : mappedBlocks)
    {
        VkResult res = block.block->Map(allocator, block.data, VMA_NULL);
        VMA_ASSERT(res == VK_SUCCESS);
    }
    return result;
}

bool VmaDefragmentationContext_T::ComputeDefragmentation(VmaBlockVector& vector, size_t index)
{
    switch (m_Algorithm)
    {
    case VMA_DEFRAGMENTATION_FLAG_ALGORITHM_FAST_BIT:
        return ComputeDefragmentation_Fast(vector);
    default:
        VMA_ASSERT(0);
        // Intentional fallthrough.
    case VMA_DEFRAGMENTATION_FLAG_ALGORITHM_BALANCED_BIT:
        return ComputeDefragmentation_Balanced(vector, index, true);
    case VMA_DEFRAGMENTATION_FLAG_ALGORITHM_FULL_BIT:
        return ComputeDefragmentation_Full(vector);
    case VMA_DEFRAGMENTATION_FLAG_ALGORITHM_EXTENSIVE_BIT:
        return ComputeDefragmentation_Extensive(vector, index);
    }
}

VmaDefragmentationContext_T::MoveAllocationData VmaDefragmentationContext_T::GetMoveData(
    VmaAllocHandle handle, VmaBlockMetadata* metadata)
{
    MoveAllocationData moveData;
    moveData.move.srcAllocation = (VmaAllocation)metadata->GetAllocationUserData(handle);
    moveData.size = moveData.move.srcAllocation->GetSize();
    moveData.alignment = moveData.move.srcAllocation->GetAlignment();
    moveData.type = moveData.move.srcAllocation->GetSuballocationType();
    moveData.flags = 0;

    if (moveData.move.srcAllocation->IsPersistentMap())
        moveData.flags |= VMA_ALLOCATION_CREATE_MAPPED_BIT;
    if (moveData.move.srcAllocation->IsMappingAllowed())
        moveData.flags |= VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT | VMA_ALLOCATION_CREATE_HOST_ACCESS_RANDOM_BIT;

    return moveData;
}

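// Enforces the per-pass budget: an allocation that would push bytesMoved past
// m_MaxPassBytes is skipped (Ignore) until MAX_ALLOCS_TO_IGNORE such skips have
// accumulated, at which point the pass is ended early (End).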
CheckCounters(VkDeviceSize bytes)13393 VmaDefragmentationContext_T::CounterStatus VmaDefragmentationContext_T::CheckCounters(VkDeviceSize bytes)
13394 {
13395     // Ignore allocation if will exceed max size for copy
13396     if (m_PassStats.bytesMoved + bytes > m_MaxPassBytes)
13397     {
13398         if (++m_IgnoredAllocs < MAX_ALLOCS_TO_IGNORE)
13399             return CounterStatus::Ignore;
13400         else
13401             return CounterStatus::End;
13402     }
13403     return CounterStatus::Pass;
13404 }
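
// Note: m_MaxPassBytes and m_MaxPassAllocations correspond to the public
// VmaDefragmentationInfo::maxBytesPerPass and maxAllocationsPerPass limits
// (0 in the public struct means "no limit"). CheckCounters() only tests the
// remaining budget; IncrementCounters() below commits it once a move is queued.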

bool VmaDefragmentationContext_T::IncrementCounters(VkDeviceSize bytes)
{
    m_PassStats.bytesMoved += bytes;
    // Early return when the per-pass maximum is reached
    if (++m_PassStats.allocationsMoved >= m_MaxPassAllocations || m_PassStats.bytesMoved >= m_MaxPassBytes)
    {
        VMA_ASSERT(m_PassStats.allocationsMoved == m_MaxPassAllocations ||
            m_PassStats.bytesMoved == m_MaxPassBytes && "Exceeded maximal pass threshold!");
        return true;
    }
    return false;
}

bool VmaDefragmentationContext_T::ReallocWithinBlock(VmaBlockVector& vector, VmaDeviceMemoryBlock* block)
{
    VmaBlockMetadata* metadata = block->m_pMetadata;

    for (VmaAllocHandle handle = metadata->GetAllocationListBegin();
        handle != VK_NULL_HANDLE;
        handle = metadata->GetNextAllocation(handle))
    {
        MoveAllocationData moveData = GetMoveData(handle, metadata);
        // Ignore allocations newly created by the defragmentation algorithm itself
        if (moveData.move.srcAllocation->GetUserData() == this)
            continue;
        switch (CheckCounters(moveData.move.srcAllocation->GetSize()))
        {
        case CounterStatus::Ignore:
            continue;
        case CounterStatus::End:
            return true;
        default:
            VMA_ASSERT(0);
        case CounterStatus::Pass:
            break;
        }

        VkDeviceSize offset = moveData.move.srcAllocation->GetOffset();
        if (offset != 0 && metadata->GetSumFreeSize() >= moveData.size)
        {
            VmaAllocationRequest request = {};
            if (metadata->CreateAllocationRequest(
                moveData.size,
                moveData.alignment,
                false,
                moveData.type,
                VMA_ALLOCATION_CREATE_STRATEGY_MIN_OFFSET_BIT,
                &request))
            {
                if (metadata->GetAllocationOffset(request.allocHandle) < offset)
                {
                    if (vector.CommitAllocationRequest(
                        request,
                        block,
                        moveData.alignment,
                        moveData.flags,
                        this,
                        moveData.type,
                        &moveData.move.dstTmpAllocation) == VK_SUCCESS)
                    {
                        m_Moves.push_back(moveData.move);
                        if (IncrementCounters(moveData.size))
                            return true;
                    }
                }
            }
        }
    }
    return false;
}
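
// For illustration, the effect of ReallocWithinBlock on a single block
// (A re-placed at the lowest offset that fits, via the MIN_OFFSET strategy):
//
//     before: |.....AAAA...BB....|
//     after:  |AAAA........BB....|
//
// The source allocation stays valid until the queued move is applied when the pass ends.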

bool VmaDefragmentationContext_T::AllocInOtherBlock(size_t start, size_t end, MoveAllocationData& data, VmaBlockVector& vector)
{
    for (; start < end; ++start)
    {
        VmaDeviceMemoryBlock* dstBlock = vector.GetBlock(start);
        if (dstBlock->m_pMetadata->GetSumFreeSize() >= data.size)
        {
            if (vector.AllocateFromBlock(dstBlock,
                data.size,
                data.alignment,
                data.flags,
                this,
                data.type,
                0,
                &data.move.dstTmpAllocation) == VK_SUCCESS)
            {
                m_Moves.push_back(data.move);
                if (IncrementCounters(data.size))
                    return true;
                break;
            }
        }
    }
    return false;
}
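
// Note: AllocInOtherBlock() scans blocks [start, end) in order and queues a move into the
// first block that can actually fit the allocation (candidates are pre-filtered by total
// free size); the data itself is copied only when the pass ends. It returns true only when
// the per-pass counters end the pass.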

bool VmaDefragmentationContext_T::ComputeDefragmentation_Fast(VmaBlockVector& vector)
{
    // Move only between blocks

    // Go through allocations in the last blocks and try to fit them into the first ones
    for (size_t i = vector.GetBlockCount() - 1; i > m_ImmovableBlockCount; --i)
    {
        VmaBlockMetadata* metadata = vector.GetBlock(i)->m_pMetadata;

        for (VmaAllocHandle handle = metadata->GetAllocationListBegin();
            handle != VK_NULL_HANDLE;
            handle = metadata->GetNextAllocation(handle))
        {
            MoveAllocationData moveData = GetMoveData(handle, metadata);
            // Ignore allocations newly created by the defragmentation algorithm itself
            if (moveData.move.srcAllocation->GetUserData() == this)
                continue;
            switch (CheckCounters(moveData.move.srcAllocation->GetSize()))
            {
            case CounterStatus::Ignore:
                continue;
            case CounterStatus::End:
                return true;
            default:
                VMA_ASSERT(0);
            case CounterStatus::Pass:
                break;
            }

            // Check all previous blocks for free space
            if (AllocInOtherBlock(0, i, moveData, vector))
                return true;
        }
    }
    return false;
}
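
// Note: the fast algorithm never moves data within a block; it only migrates whole
// allocations from the tail blocks into free space of earlier blocks, so that fully
// emptied tail blocks can be freed later.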

bool VmaDefragmentationContext_T::ComputeDefragmentation_Balanced(VmaBlockVector& vector, size_t index, bool update)
{
    // Go over every allocation and try to fit it in previous blocks at lowest offsets,
    // if not possible: realloc within single block to minimize offset (exclude offset == 0),
    // but only if there are noticeable gaps between them (some heuristic, e.g. average size of allocation in block)
    VMA_ASSERT(m_AlgorithmState != VMA_NULL);

    StateBalanced& vectorState = reinterpret_cast<StateBalanced*>(m_AlgorithmState)[index];
    if (update && vectorState.avgAllocSize == UINT64_MAX)
        UpdateVectorStatistics(vector, vectorState);

    const size_t startMoveCount = m_Moves.size();
    VkDeviceSize minimalFreeRegion = vectorState.avgFreeSize / 2;
    for (size_t i = vector.GetBlockCount() - 1; i > m_ImmovableBlockCount; --i)
    {
        VmaDeviceMemoryBlock* block = vector.GetBlock(i);
        VmaBlockMetadata* metadata = block->m_pMetadata;
        VkDeviceSize prevFreeRegionSize = 0;

        for (VmaAllocHandle handle = metadata->GetAllocationListBegin();
            handle != VK_NULL_HANDLE;
            handle = metadata->GetNextAllocation(handle))
        {
            MoveAllocationData moveData = GetMoveData(handle, metadata);
            // Ignore allocations newly created by the defragmentation algorithm itself
            if (moveData.move.srcAllocation->GetUserData() == this)
                continue;
            switch (CheckCounters(moveData.move.srcAllocation->GetSize()))
            {
            case CounterStatus::Ignore:
                continue;
            case CounterStatus::End:
                return true;
            default:
                VMA_ASSERT(0);
            case CounterStatus::Pass:
                break;
            }

            // Check all previous blocks for free space
            const size_t prevMoveCount = m_Moves.size();
            if (AllocInOtherBlock(0, i, moveData, vector))
                return true;

            VkDeviceSize nextFreeRegionSize = metadata->GetNextFreeRegionSize(handle);
            // If no room found then realloc within block for lower offset
            VkDeviceSize offset = moveData.move.srcAllocation->GetOffset();
            if (prevMoveCount == m_Moves.size() && offset != 0 && metadata->GetSumFreeSize() >= moveData.size)
            {
                // Check if realloc will make sense
                if (prevFreeRegionSize >= minimalFreeRegion ||
                    nextFreeRegionSize >= minimalFreeRegion ||
                    moveData.size <= vectorState.avgFreeSize ||
                    moveData.size <= vectorState.avgAllocSize)
                {
                    VmaAllocationRequest request = {};
                    if (metadata->CreateAllocationRequest(
                        moveData.size,
                        moveData.alignment,
                        false,
                        moveData.type,
                        VMA_ALLOCATION_CREATE_STRATEGY_MIN_OFFSET_BIT,
                        &request))
                    {
                        if (metadata->GetAllocationOffset(request.allocHandle) < offset)
                        {
                            if (vector.CommitAllocationRequest(
                                request,
                                block,
                                moveData.alignment,
                                moveData.flags,
                                this,
                                moveData.type,
                                &moveData.move.dstTmpAllocation) == VK_SUCCESS)
                            {
                                m_Moves.push_back(moveData.move);
                                if (IncrementCounters(moveData.size))
                                    return true;
                            }
                        }
                    }
                }
            }
            prevFreeRegionSize = nextFreeRegionSize;
        }
    }

    // No moves performed, update statistics to current vector state
    if (startMoveCount == m_Moves.size() && !update)
    {
        vectorState.avgAllocSize = UINT64_MAX;
        return ComputeDefragmentation_Balanced(vector, index, false);
    }
    return false;
}
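
// Worked example of the heuristic above: if UpdateVectorStatistics() measured
// avgFreeSize = 64 KiB, then minimalFreeRegion = 32 KiB, and an in-block realloc is
// attempted only for an allocation that borders a free region of at least 32 KiB or is
// itself no larger than the average free/allocation size; large, tightly packed
// allocations are left in place to keep the pass cheap.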

bool VmaDefragmentationContext_T::ComputeDefragmentation_Full(VmaBlockVector& vector)
{
    // Go over every allocation and try to fit it in previous blocks at lowest offsets,
    // if not possible: realloc within single block to minimize offset (exclude offset == 0)

    for (size_t i = vector.GetBlockCount() - 1; i > m_ImmovableBlockCount; --i)
    {
        VmaDeviceMemoryBlock* block = vector.GetBlock(i);
        VmaBlockMetadata* metadata = block->m_pMetadata;

        for (VmaAllocHandle handle = metadata->GetAllocationListBegin();
            handle != VK_NULL_HANDLE;
            handle = metadata->GetNextAllocation(handle))
        {
            MoveAllocationData moveData = GetMoveData(handle, metadata);
            // Ignore allocations newly created by the defragmentation algorithm itself
            if (moveData.move.srcAllocation->GetUserData() == this)
                continue;
            switch (CheckCounters(moveData.move.srcAllocation->GetSize()))
            {
            case CounterStatus::Ignore:
                continue;
            case CounterStatus::End:
                return true;
            default:
                VMA_ASSERT(0);
            case CounterStatus::Pass:
                break;
            }

            // Check all previous blocks for free space
            const size_t prevMoveCount = m_Moves.size();
            if (AllocInOtherBlock(0, i, moveData, vector))
                return true;

            // If no room found then realloc within block for lower offset
            VkDeviceSize offset = moveData.move.srcAllocation->GetOffset();
            if (prevMoveCount == m_Moves.size() && offset != 0 && metadata->GetSumFreeSize() >= moveData.size)
            {
                VmaAllocationRequest request = {};
                if (metadata->CreateAllocationRequest(
                    moveData.size,
                    moveData.alignment,
                    false,
                    moveData.type,
                    VMA_ALLOCATION_CREATE_STRATEGY_MIN_OFFSET_BIT,
                    &request))
                {
                    if (metadata->GetAllocationOffset(request.allocHandle) < offset)
                    {
                        if (vector.CommitAllocationRequest(
                            request,
                            block,
                            moveData.alignment,
                            moveData.flags,
                            this,
                            moveData.type,
                            &moveData.move.dstTmpAllocation) == VK_SUCCESS)
                        {
                            m_Moves.push_back(moveData.move);
                            if (IncrementCounters(moveData.size))
                                return true;
                        }
                    }
                }
            }
        }
    }
    return false;
}
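
// Note: Full is the Balanced algorithm without the gap heuristic: every allocation that
// cannot migrate to an earlier block is re-placed within its own block whenever that
// lowers its offset.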

bool VmaDefragmentationContext_T::ComputeDefragmentation_Extensive(VmaBlockVector& vector, size_t index)
{
    // First free single block, then populate it to the brim, then free another block, and so on

    // Fall back to the previous algorithm, since without granularity conflicts it can achieve maximum packing
    if (vector.m_BufferImageGranularity == 1)
        return ComputeDefragmentation_Full(vector);

    VMA_ASSERT(m_AlgorithmState != VMA_NULL);

    StateExtensive& vectorState = reinterpret_cast<StateExtensive*>(m_AlgorithmState)[index];

    bool texturePresent = false, bufferPresent = false, otherPresent = false;
    switch (vectorState.operation)
    {
    case StateExtensive::Operation::Done: // Vector defragmented
        return false;
    case StateExtensive::Operation::FindFreeBlockBuffer:
    case StateExtensive::Operation::FindFreeBlockTexture:
    case StateExtensive::Operation::FindFreeBlockAll:
    {
        // No more blocks to free, just perform fast realloc and move to cleanup
        if (vectorState.firstFreeBlock == 0)
        {
            vectorState.operation = StateExtensive::Operation::Cleanup;
            return ComputeDefragmentation_Fast(vector);
        }

        // No free blocks, have to clear the last one
        size_t last = (vectorState.firstFreeBlock == SIZE_MAX ? vector.GetBlockCount() : vectorState.firstFreeBlock) - 1;
        VmaBlockMetadata* freeMetadata = vector.GetBlock(last)->m_pMetadata;

        const size_t prevMoveCount = m_Moves.size();
        for (VmaAllocHandle handle = freeMetadata->GetAllocationListBegin();
            handle != VK_NULL_HANDLE;
            handle = freeMetadata->GetNextAllocation(handle))
        {
            MoveAllocationData moveData = GetMoveData(handle, freeMetadata);
            switch (CheckCounters(moveData.move.srcAllocation->GetSize()))
            {
            case CounterStatus::Ignore:
                continue;
            case CounterStatus::End:
                return true;
            default:
                VMA_ASSERT(0);
            case CounterStatus::Pass:
                break;
            }

            // Check all previous blocks for free space
            if (AllocInOtherBlock(0, last, moveData, vector))
            {
                // Full clear performed already
                if (prevMoveCount != m_Moves.size() && freeMetadata->GetNextAllocation(handle) == VK_NULL_HANDLE)
                    reinterpret_cast<size_t*>(m_AlgorithmState)[index] = last;
                return true;
            }
        }

        if (prevMoveCount == m_Moves.size())
        {
            // Cannot perform full clear, have to move data in other blocks around
            if (last != 0)
            {
                for (size_t i = last - 1; i; --i)
                {
                    if (ReallocWithinBlock(vector, vector.GetBlock(i)))
                        return true;
                }
            }

            if (prevMoveCount == m_Moves.size())
            {
                // No possible reallocs within blocks, try to move them around fast
                return ComputeDefragmentation_Fast(vector);
            }
        }
        else
        {
            switch (vectorState.operation)
            {
            case StateExtensive::Operation::FindFreeBlockBuffer:
                vectorState.operation = StateExtensive::Operation::MoveBuffers;
                break;
            default:
                VMA_ASSERT(0);
            case StateExtensive::Operation::FindFreeBlockTexture:
                vectorState.operation = StateExtensive::Operation::MoveTextures;
                break;
            case StateExtensive::Operation::FindFreeBlockAll:
                vectorState.operation = StateExtensive::Operation::MoveAll;
                break;
            }
            vectorState.firstFreeBlock = last;
            // Nothing moved, a free block was found without reallocations, so further reallocs can be performed in the same pass
            return ComputeDefragmentation_Extensive(vector, index);
        }
        break;
    }
    case StateExtensive::Operation::MoveTextures:
    {
        if (MoveDataToFreeBlocks(VMA_SUBALLOCATION_TYPE_IMAGE_OPTIMAL, vector,
            vectorState.firstFreeBlock, texturePresent, bufferPresent, otherPresent))
        {
            if (texturePresent)
            {
                vectorState.operation = StateExtensive::Operation::FindFreeBlockTexture;
                return ComputeDefragmentation_Extensive(vector, index);
            }

            if (!bufferPresent && !otherPresent)
            {
                vectorState.operation = StateExtensive::Operation::Cleanup;
                break;
            }

            // No more textures to move, check buffers
            vectorState.operation = StateExtensive::Operation::MoveBuffers;
            bufferPresent = false;
            otherPresent = false;
        }
        else
            break;
    }
    case StateExtensive::Operation::MoveBuffers:
    {
        if (MoveDataToFreeBlocks(VMA_SUBALLOCATION_TYPE_BUFFER, vector,
            vectorState.firstFreeBlock, texturePresent, bufferPresent, otherPresent))
        {
            if (bufferPresent)
            {
                vectorState.operation = StateExtensive::Operation::FindFreeBlockBuffer;
                return ComputeDefragmentation_Extensive(vector, index);
            }

            if (!otherPresent)
            {
                vectorState.operation = StateExtensive::Operation::Cleanup;
                break;
            }

            // No more buffers to move, check all others
            vectorState.operation = StateExtensive::Operation::MoveAll;
            otherPresent = false;
        }
        else
            break;
    }
    case StateExtensive::Operation::MoveAll:
    {
        if (MoveDataToFreeBlocks(VMA_SUBALLOCATION_TYPE_FREE, vector,
            vectorState.firstFreeBlock, texturePresent, bufferPresent, otherPresent))
        {
            if (otherPresent)
            {
                vectorState.operation = StateExtensive::Operation::FindFreeBlockBuffer;
                return ComputeDefragmentation_Extensive(vector, index);
            }
            // Everything moved
            vectorState.operation = StateExtensive::Operation::Cleanup;
        }
        break;
    }
    case StateExtensive::Operation::Cleanup:
        // Cleanup is handled below so that other operations may reuse the cleanup code. This case is here to prevent the unhandled enum value warning (C4062).
        break;
    }

    if (vectorState.operation == StateExtensive::Operation::Cleanup)
    {
        // All other work done, pack data in blocks even tighter if possible
        const size_t prevMoveCount = m_Moves.size();
        for (size_t i = 0; i < vector.GetBlockCount(); ++i)
        {
            if (ReallocWithinBlock(vector, vector.GetBlock(i)))
                return true;
        }

        if (prevMoveCount == m_Moves.size())
            vectorState.operation = StateExtensive::Operation::Done;
    }
    return false;
}
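
// State machine summary: per block vector, the extensive algorithm cycles
// FindFreeBlock{Texture,Buffer,All} -> Move{Textures,Buffers,All} -> Cleanup -> Done.
// It empties one tail block at a time, then repacks it type by type to avoid
// bufferImageGranularity padding, and repeats until no further moves are found.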

void VmaDefragmentationContext_T::UpdateVectorStatistics(VmaBlockVector& vector, StateBalanced& state)
{
    size_t allocCount = 0;
    size_t freeCount = 0;
    state.avgFreeSize = 0;
    state.avgAllocSize = 0;

    for (size_t i = 0; i < vector.GetBlockCount(); ++i)
    {
        VmaBlockMetadata* metadata = vector.GetBlock(i)->m_pMetadata;

        allocCount += metadata->GetAllocationCount();
        freeCount += metadata->GetFreeRegionsCount();
        state.avgFreeSize += metadata->GetSumFreeSize();
        state.avgAllocSize += metadata->GetSize();
    }

    state.avgAllocSize = (state.avgAllocSize - state.avgFreeSize) / allocCount;
    state.avgFreeSize /= freeCount;
}
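
// In effect:
//     avgAllocSize = (totalBlockSize - totalFreeSize) / allocationCount
//     avgFreeSize  = totalFreeSize / freeRegionCount
// Note the divisions assume at least one allocation and one free region in the vector.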

bool VmaDefragmentationContext_T::MoveDataToFreeBlocks(VmaSuballocationType currentType,
    VmaBlockVector& vector, size_t firstFreeBlock,
    bool& texturePresent, bool& bufferPresent, bool& otherPresent)
{
    const size_t prevMoveCount = m_Moves.size();
    for (size_t i = firstFreeBlock; i;)
    {
        VmaDeviceMemoryBlock* block = vector.GetBlock(--i);
        VmaBlockMetadata* metadata = block->m_pMetadata;

        for (VmaAllocHandle handle = metadata->GetAllocationListBegin();
            handle != VK_NULL_HANDLE;
            handle = metadata->GetNextAllocation(handle))
        {
            MoveAllocationData moveData = GetMoveData(handle, metadata);
            // Ignore allocations newly created by the defragmentation algorithm itself
            if (moveData.move.srcAllocation->GetUserData() == this)
                continue;
            switch (CheckCounters(moveData.move.srcAllocation->GetSize()))
            {
            case CounterStatus::Ignore:
                continue;
            case CounterStatus::End:
                return true;
            default:
                VMA_ASSERT(0);
            case CounterStatus::Pass:
                break;
            }

            // Move only a single type of resources at once
            if (!VmaIsBufferImageGranularityConflict(moveData.type, currentType))
            {
                // Try to fit the allocation into one of the free blocks
                if (AllocInOtherBlock(firstFreeBlock, vector.GetBlockCount(), moveData, vector))
                    return false;
            }

            if (!VmaIsBufferImageGranularityConflict(moveData.type, VMA_SUBALLOCATION_TYPE_IMAGE_OPTIMAL))
                texturePresent = true;
            else if (!VmaIsBufferImageGranularityConflict(moveData.type, VMA_SUBALLOCATION_TYPE_BUFFER))
                bufferPresent = true;
            else
                otherPresent = true;
        }
    }
    return prevMoveCount == m_Moves.size();
}
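
// Return-value note: true means either no new moves were queued (nothing of currentType
// remained to relocate) or CheckCounters() ended the pass; false means moves were queued
// (including the case where a queued move exhausted the counters inside AllocInOtherBlock()).
// Callers use a true result to advance the extensive state machine.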
#endif // _VMA_DEFRAGMENTATION_CONTEXT_FUNCTIONS

#ifndef _VMA_POOL_T_FUNCTIONS
VmaPool_T::VmaPool_T(
    VmaAllocator hAllocator,
    const VmaPoolCreateInfo& createInfo,
    VkDeviceSize preferredBlockSize)
    : m_BlockVector(
        hAllocator,
        this, // hParentPool
        createInfo.memoryTypeIndex,
        createInfo.blockSize != 0 ? createInfo.blockSize : preferredBlockSize,
        createInfo.minBlockCount,
        createInfo.maxBlockCount,
        (createInfo.flags & VMA_POOL_CREATE_IGNORE_BUFFER_IMAGE_GRANULARITY_BIT) != 0 ? 1 : hAllocator->GetBufferImageGranularity(),
        createInfo.blockSize != 0, // explicitBlockSize
        createInfo.flags & VMA_POOL_CREATE_ALGORITHM_MASK, // algorithm
        createInfo.priority,
        VMA_MAX(hAllocator->GetMemoryTypeMinAlignment(createInfo.memoryTypeIndex), createInfo.minAllocationAlignment),
        createInfo.pMemoryAllocateNext),
    m_Id(0),
    m_Name(VMA_NULL) {}
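
// Usage sketch, for illustration only (assuming a valid `allocator` and a chosen `memTypeIndex`):
//
//     VmaPoolCreateInfo poolInfo = {};
//     poolInfo.memoryTypeIndex = memTypeIndex; // e.g. from vmaFindMemoryTypeIndex()
//     poolInfo.blockSize = 0;                  // 0 = use the allocator's preferred block size
//     VmaPool pool;
//     VkResult res = vmaCreatePool(allocator, &poolInfo, &pool);
//
// A non-zero createInfo.blockSize also marks the block size as explicit, which disables
// the heuristic that lets default pools start with smaller blocks and grow them.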

VmaPool_T::~VmaPool_T()
{
    VMA_ASSERT(m_PrevPool == VMA_NULL && m_NextPool == VMA_NULL);
}

void VmaPool_T::SetName(const char* pName)
{
    const VkAllocationCallbacks* allocs = m_BlockVector.GetAllocator()->GetAllocationCallbacks();
    VmaFreeString(allocs, m_Name);

    if (pName != VMA_NULL)
    {
        m_Name = VmaCreateStringCopy(allocs, pName);
    }
    else
    {
        m_Name = VMA_NULL;
    }
}
#endif // _VMA_POOL_T_FUNCTIONS

#ifndef _VMA_ALLOCATOR_T_FUNCTIONS
VmaAllocator_T::VmaAllocator_T(const VmaAllocatorCreateInfo* pCreateInfo) :
    m_UseMutex((pCreateInfo->flags & VMA_ALLOCATOR_CREATE_EXTERNALLY_SYNCHRONIZED_BIT) == 0),
    m_VulkanApiVersion(pCreateInfo->vulkanApiVersion != 0 ? pCreateInfo->vulkanApiVersion : VK_API_VERSION_1_0),
    m_UseKhrDedicatedAllocation((pCreateInfo->flags & VMA_ALLOCATOR_CREATE_KHR_DEDICATED_ALLOCATION_BIT) != 0),
    m_UseKhrBindMemory2((pCreateInfo->flags & VMA_ALLOCATOR_CREATE_KHR_BIND_MEMORY2_BIT) != 0),
    m_UseExtMemoryBudget((pCreateInfo->flags & VMA_ALLOCATOR_CREATE_EXT_MEMORY_BUDGET_BIT) != 0),
    m_UseAmdDeviceCoherentMemory((pCreateInfo->flags & VMA_ALLOCATOR_CREATE_AMD_DEVICE_COHERENT_MEMORY_BIT) != 0),
    m_UseKhrBufferDeviceAddress((pCreateInfo->flags & VMA_ALLOCATOR_CREATE_BUFFER_DEVICE_ADDRESS_BIT) != 0),
    m_UseExtMemoryPriority((pCreateInfo->flags & VMA_ALLOCATOR_CREATE_EXT_MEMORY_PRIORITY_BIT) != 0),
    m_hDevice(pCreateInfo->device),
    m_hInstance(pCreateInfo->instance),
    m_AllocationCallbacksSpecified(pCreateInfo->pAllocationCallbacks != VMA_NULL),
    m_AllocationCallbacks(pCreateInfo->pAllocationCallbacks ?
        *pCreateInfo->pAllocationCallbacks : VmaEmptyAllocationCallbacks),
    m_AllocationObjectAllocator(&m_AllocationCallbacks),
    m_HeapSizeLimitMask(0),
    m_DeviceMemoryCount(0),
    m_PreferredLargeHeapBlockSize(0),
    m_PhysicalDevice(pCreateInfo->physicalDevice),
    m_GpuDefragmentationMemoryTypeBits(UINT32_MAX),
    m_NextPoolId(0),
    m_GlobalMemoryTypeBits(UINT32_MAX)
{
    if(m_VulkanApiVersion >= VK_MAKE_VERSION(1, 1, 0))
    {
        m_UseKhrDedicatedAllocation = false;
        m_UseKhrBindMemory2 = false;
    }

    if(VMA_DEBUG_DETECT_CORRUPTION)
    {
        // Needs to be a multiple of uint32_t size because we are going to write VMA_CORRUPTION_DETECTION_MAGIC_VALUE to it.
        VMA_ASSERT(VMA_DEBUG_MARGIN % sizeof(uint32_t) == 0);
    }

    VMA_ASSERT(pCreateInfo->physicalDevice && pCreateInfo->device && pCreateInfo->instance);

    if(m_VulkanApiVersion < VK_MAKE_VERSION(1, 1, 0))
    {
#if !(VMA_DEDICATED_ALLOCATION)
        if((pCreateInfo->flags & VMA_ALLOCATOR_CREATE_KHR_DEDICATED_ALLOCATION_BIT) != 0)
        {
            VMA_ASSERT(0 && "VMA_ALLOCATOR_CREATE_KHR_DEDICATED_ALLOCATION_BIT set but required extensions are disabled by preprocessor macros.");
        }
#endif
#if !(VMA_BIND_MEMORY2)
        if((pCreateInfo->flags & VMA_ALLOCATOR_CREATE_KHR_BIND_MEMORY2_BIT) != 0)
        {
            VMA_ASSERT(0 && "VMA_ALLOCATOR_CREATE_KHR_BIND_MEMORY2_BIT set but required extension is disabled by preprocessor macros.");
        }
#endif
    }
#if !(VMA_MEMORY_BUDGET)
    if((pCreateInfo->flags & VMA_ALLOCATOR_CREATE_EXT_MEMORY_BUDGET_BIT) != 0)
    {
        VMA_ASSERT(0 && "VMA_ALLOCATOR_CREATE_EXT_MEMORY_BUDGET_BIT set but required extension is disabled by preprocessor macros.");
    }
#endif
#if !(VMA_BUFFER_DEVICE_ADDRESS)
    if(m_UseKhrBufferDeviceAddress)
    {
        VMA_ASSERT(0 && "VMA_ALLOCATOR_CREATE_BUFFER_DEVICE_ADDRESS_BIT is set but required extension or Vulkan 1.2 is not available in your Vulkan header or its support in VMA has been disabled by a preprocessor macro.");
    }
#endif
#if VMA_VULKAN_VERSION < 1002000
    if(m_VulkanApiVersion >= VK_MAKE_VERSION(1, 2, 0))
    {
        VMA_ASSERT(0 && "vulkanApiVersion >= VK_API_VERSION_1_2 but required Vulkan version is disabled by preprocessor macros.");
    }
#endif
#if VMA_VULKAN_VERSION < 1001000
    if(m_VulkanApiVersion >= VK_MAKE_VERSION(1, 1, 0))
    {
        VMA_ASSERT(0 && "vulkanApiVersion >= VK_API_VERSION_1_1 but required Vulkan version is disabled by preprocessor macros.");
    }
#endif
#if !(VMA_MEMORY_PRIORITY)
    if(m_UseExtMemoryPriority)
    {
        VMA_ASSERT(0 && "VMA_ALLOCATOR_CREATE_EXT_MEMORY_PRIORITY_BIT is set but required extension is not available in your Vulkan header or its support in VMA has been disabled by a preprocessor macro.");
    }
#endif

    memset(&m_DeviceMemoryCallbacks, 0, sizeof(m_DeviceMemoryCallbacks));
    memset(&m_PhysicalDeviceProperties, 0, sizeof(m_PhysicalDeviceProperties));
    memset(&m_MemProps, 0, sizeof(m_MemProps));

    memset(&m_pBlockVectors, 0, sizeof(m_pBlockVectors));
    memset(&m_VulkanFunctions, 0, sizeof(m_VulkanFunctions));

#if VMA_EXTERNAL_MEMORY
    memset(&m_TypeExternalMemoryHandleTypes, 0, sizeof(m_TypeExternalMemoryHandleTypes));
#endif // #if VMA_EXTERNAL_MEMORY

    if(pCreateInfo->pDeviceMemoryCallbacks != VMA_NULL)
    {
        m_DeviceMemoryCallbacks.pUserData = pCreateInfo->pDeviceMemoryCallbacks->pUserData;
        m_DeviceMemoryCallbacks.pfnAllocate = pCreateInfo->pDeviceMemoryCallbacks->pfnAllocate;
        m_DeviceMemoryCallbacks.pfnFree = pCreateInfo->pDeviceMemoryCallbacks->pfnFree;
    }

    ImportVulkanFunctions(pCreateInfo->pVulkanFunctions);

    (*m_VulkanFunctions.vkGetPhysicalDeviceProperties)(m_PhysicalDevice, &m_PhysicalDeviceProperties);
    (*m_VulkanFunctions.vkGetPhysicalDeviceMemoryProperties)(m_PhysicalDevice, &m_MemProps);

    VMA_ASSERT(VmaIsPow2(VMA_MIN_ALIGNMENT));
    VMA_ASSERT(VmaIsPow2(VMA_DEBUG_MIN_BUFFER_IMAGE_GRANULARITY));
    VMA_ASSERT(VmaIsPow2(m_PhysicalDeviceProperties.limits.bufferImageGranularity));
    VMA_ASSERT(VmaIsPow2(m_PhysicalDeviceProperties.limits.nonCoherentAtomSize));

    m_PreferredLargeHeapBlockSize = (pCreateInfo->preferredLargeHeapBlockSize != 0) ?
        pCreateInfo->preferredLargeHeapBlockSize : static_cast<VkDeviceSize>(VMA_DEFAULT_LARGE_HEAP_BLOCK_SIZE);

    m_GlobalMemoryTypeBits = CalculateGlobalMemoryTypeBits();

#if VMA_EXTERNAL_MEMORY
    if(pCreateInfo->pTypeExternalMemoryHandleTypes != VMA_NULL)
    {
        memcpy(m_TypeExternalMemoryHandleTypes, pCreateInfo->pTypeExternalMemoryHandleTypes,
            sizeof(VkExternalMemoryHandleTypeFlagsKHR) * GetMemoryTypeCount());
    }
#endif // #if VMA_EXTERNAL_MEMORY

    if(pCreateInfo->pHeapSizeLimit != VMA_NULL)
    {
        for(uint32_t heapIndex = 0; heapIndex < GetMemoryHeapCount(); ++heapIndex)
        {
            const VkDeviceSize limit = pCreateInfo->pHeapSizeLimit[heapIndex];
            if(limit != VK_WHOLE_SIZE)
            {
                m_HeapSizeLimitMask |= 1u << heapIndex;
                if(limit < m_MemProps.memoryHeaps[heapIndex].size)
                {
                    m_MemProps.memoryHeaps[heapIndex].size = limit;
                }
            }
        }
    }

    for(uint32_t memTypeIndex = 0; memTypeIndex < GetMemoryTypeCount(); ++memTypeIndex)
    {
        // Create only supported types
        if((m_GlobalMemoryTypeBits & (1u << memTypeIndex)) != 0)
        {
            const VkDeviceSize preferredBlockSize = CalcPreferredBlockSize(memTypeIndex);
            m_pBlockVectors[memTypeIndex] = vma_new(this, VmaBlockVector)(
                this,
                VK_NULL_HANDLE, // hParentPool
                memTypeIndex,
                preferredBlockSize,
                0,
                SIZE_MAX,
                GetBufferImageGranularity(),
                false, // explicitBlockSize
                0, // algorithm
                0.5f, // priority (0.5 is the default per Vulkan spec)
                GetMemoryTypeMinAlignment(memTypeIndex), // minAllocationAlignment
                VMA_NULL); // pMemoryAllocateNext
            // No need to call m_pBlockVectors[memTypeIndex][blockVectorTypeIndex]->CreateMinBlocks here,
            // because minBlockCount is 0.
        }
    }
}
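
// Creation sketch, for illustration only; this constructor runs inside vmaCreateAllocator().
// A minimal setup, assuming valid instance/physicalDevice/device handles:
//
//     VmaVulkanFunctions vulkanFunctions = {};
//     vulkanFunctions.vkGetInstanceProcAddr = vkGetInstanceProcAddr;
//     vulkanFunctions.vkGetDeviceProcAddr = vkGetDeviceProcAddr;
//
//     VmaAllocatorCreateInfo allocatorInfo = {};
//     allocatorInfo.vulkanApiVersion = VK_API_VERSION_1_2;
//     allocatorInfo.physicalDevice = physicalDevice;
//     allocatorInfo.device = device;
//     allocatorInfo.instance = instance;
//     allocatorInfo.pVulkanFunctions = &vulkanFunctions;
//
//     VmaAllocator allocator;
//     VkResult res = vmaCreateAllocator(&allocatorInfo, &allocator);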

VkResult VmaAllocator_T::Init(const VmaAllocatorCreateInfo* pCreateInfo)
{
    VkResult res = VK_SUCCESS;

#if VMA_MEMORY_BUDGET
    if(m_UseExtMemoryBudget)
    {
        UpdateVulkanBudget();
    }
#endif // #if VMA_MEMORY_BUDGET

    return res;
}

VmaAllocator_T::~VmaAllocator_T()
{
    VMA_ASSERT(m_Pools.IsEmpty());

    for(size_t memTypeIndex = GetMemoryTypeCount(); memTypeIndex--; )
    {
        vma_delete(this, m_pBlockVectors[memTypeIndex]);
    }
}

void VmaAllocator_T::ImportVulkanFunctions(const VmaVulkanFunctions* pVulkanFunctions)
{
#if VMA_STATIC_VULKAN_FUNCTIONS == 1
    ImportVulkanFunctions_Static();
#endif

    if(pVulkanFunctions != VMA_NULL)
    {
        ImportVulkanFunctions_Custom(pVulkanFunctions);
    }

#if VMA_DYNAMIC_VULKAN_FUNCTIONS == 1
    ImportVulkanFunctions_Dynamic();
#endif

    ValidateVulkanFunctions();
}
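
// Import order: statically linked entry points (if enabled) are taken first, user-provided
// pointers then override them, and finally any still-missing entries are fetched through
// vkGetInstanceProcAddr/vkGetDeviceProcAddr when VMA_DYNAMIC_VULKAN_FUNCTIONS == 1.
// ValidateVulkanFunctions() asserts that every function required by the configured
// feature set ended up non-null.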

#if VMA_STATIC_VULKAN_FUNCTIONS == 1

void VmaAllocator_T::ImportVulkanFunctions_Static()
{
    // Vulkan 1.0
    m_VulkanFunctions.vkGetInstanceProcAddr = (PFN_vkGetInstanceProcAddr)vkGetInstanceProcAddr;
    m_VulkanFunctions.vkGetDeviceProcAddr = (PFN_vkGetDeviceProcAddr)vkGetDeviceProcAddr;
    m_VulkanFunctions.vkGetPhysicalDeviceProperties = (PFN_vkGetPhysicalDeviceProperties)vkGetPhysicalDeviceProperties;
    m_VulkanFunctions.vkGetPhysicalDeviceMemoryProperties = (PFN_vkGetPhysicalDeviceMemoryProperties)vkGetPhysicalDeviceMemoryProperties;
    m_VulkanFunctions.vkAllocateMemory = (PFN_vkAllocateMemory)vkAllocateMemory;
    m_VulkanFunctions.vkFreeMemory = (PFN_vkFreeMemory)vkFreeMemory;
    m_VulkanFunctions.vkMapMemory = (PFN_vkMapMemory)vkMapMemory;
    m_VulkanFunctions.vkUnmapMemory = (PFN_vkUnmapMemory)vkUnmapMemory;
    m_VulkanFunctions.vkFlushMappedMemoryRanges = (PFN_vkFlushMappedMemoryRanges)vkFlushMappedMemoryRanges;
    m_VulkanFunctions.vkInvalidateMappedMemoryRanges = (PFN_vkInvalidateMappedMemoryRanges)vkInvalidateMappedMemoryRanges;
    m_VulkanFunctions.vkBindBufferMemory = (PFN_vkBindBufferMemory)vkBindBufferMemory;
    m_VulkanFunctions.vkBindImageMemory = (PFN_vkBindImageMemory)vkBindImageMemory;
    m_VulkanFunctions.vkGetBufferMemoryRequirements = (PFN_vkGetBufferMemoryRequirements)vkGetBufferMemoryRequirements;
    m_VulkanFunctions.vkGetImageMemoryRequirements = (PFN_vkGetImageMemoryRequirements)vkGetImageMemoryRequirements;
    m_VulkanFunctions.vkCreateBuffer = (PFN_vkCreateBuffer)vkCreateBuffer;
    m_VulkanFunctions.vkDestroyBuffer = (PFN_vkDestroyBuffer)vkDestroyBuffer;
    m_VulkanFunctions.vkCreateImage = (PFN_vkCreateImage)vkCreateImage;
    m_VulkanFunctions.vkDestroyImage = (PFN_vkDestroyImage)vkDestroyImage;
    m_VulkanFunctions.vkCmdCopyBuffer = (PFN_vkCmdCopyBuffer)vkCmdCopyBuffer;

    // Vulkan 1.1
#if VMA_VULKAN_VERSION >= 1001000
    if(m_VulkanApiVersion >= VK_MAKE_VERSION(1, 1, 0))
    {
        m_VulkanFunctions.vkGetBufferMemoryRequirements2KHR = (PFN_vkGetBufferMemoryRequirements2)vkGetBufferMemoryRequirements2;
        m_VulkanFunctions.vkGetImageMemoryRequirements2KHR = (PFN_vkGetImageMemoryRequirements2)vkGetImageMemoryRequirements2;
        m_VulkanFunctions.vkBindBufferMemory2KHR = (PFN_vkBindBufferMemory2)vkBindBufferMemory2;
        m_VulkanFunctions.vkBindImageMemory2KHR = (PFN_vkBindImageMemory2)vkBindImageMemory2;
        m_VulkanFunctions.vkGetPhysicalDeviceMemoryProperties2KHR = (PFN_vkGetPhysicalDeviceMemoryProperties2)vkGetPhysicalDeviceMemoryProperties2;
    }
#endif

#if VMA_VULKAN_VERSION >= 1003000
    if(m_VulkanApiVersion >= VK_MAKE_VERSION(1, 3, 0))
    {
        m_VulkanFunctions.vkGetDeviceBufferMemoryRequirements = (PFN_vkGetDeviceBufferMemoryRequirements)vkGetDeviceBufferMemoryRequirements;
        m_VulkanFunctions.vkGetDeviceImageMemoryRequirements = (PFN_vkGetDeviceImageMemoryRequirements)vkGetDeviceImageMemoryRequirements;
    }
#endif
}

#endif // VMA_STATIC_VULKAN_FUNCTIONS == 1

void VmaAllocator_T::ImportVulkanFunctions_Custom(const VmaVulkanFunctions* pVulkanFunctions)
{
    VMA_ASSERT(pVulkanFunctions != VMA_NULL);

#define VMA_COPY_IF_NOT_NULL(funcName) \
    if(pVulkanFunctions->funcName != VMA_NULL) m_VulkanFunctions.funcName = pVulkanFunctions->funcName;

    VMA_COPY_IF_NOT_NULL(vkGetInstanceProcAddr);
    VMA_COPY_IF_NOT_NULL(vkGetDeviceProcAddr);
    VMA_COPY_IF_NOT_NULL(vkGetPhysicalDeviceProperties);
    VMA_COPY_IF_NOT_NULL(vkGetPhysicalDeviceMemoryProperties);
    VMA_COPY_IF_NOT_NULL(vkAllocateMemory);
    VMA_COPY_IF_NOT_NULL(vkFreeMemory);
    VMA_COPY_IF_NOT_NULL(vkMapMemory);
    VMA_COPY_IF_NOT_NULL(vkUnmapMemory);
    VMA_COPY_IF_NOT_NULL(vkFlushMappedMemoryRanges);
    VMA_COPY_IF_NOT_NULL(vkInvalidateMappedMemoryRanges);
    VMA_COPY_IF_NOT_NULL(vkBindBufferMemory);
    VMA_COPY_IF_NOT_NULL(vkBindImageMemory);
    VMA_COPY_IF_NOT_NULL(vkGetBufferMemoryRequirements);
    VMA_COPY_IF_NOT_NULL(vkGetImageMemoryRequirements);
    VMA_COPY_IF_NOT_NULL(vkCreateBuffer);
    VMA_COPY_IF_NOT_NULL(vkDestroyBuffer);
    VMA_COPY_IF_NOT_NULL(vkCreateImage);
    VMA_COPY_IF_NOT_NULL(vkDestroyImage);
    VMA_COPY_IF_NOT_NULL(vkCmdCopyBuffer);

#if VMA_DEDICATED_ALLOCATION || VMA_VULKAN_VERSION >= 1001000
    VMA_COPY_IF_NOT_NULL(vkGetBufferMemoryRequirements2KHR);
    VMA_COPY_IF_NOT_NULL(vkGetImageMemoryRequirements2KHR);
#endif

#if VMA_BIND_MEMORY2 || VMA_VULKAN_VERSION >= 1001000
    VMA_COPY_IF_NOT_NULL(vkBindBufferMemory2KHR);
    VMA_COPY_IF_NOT_NULL(vkBindImageMemory2KHR);
#endif

#if VMA_MEMORY_BUDGET
    VMA_COPY_IF_NOT_NULL(vkGetPhysicalDeviceMemoryProperties2KHR);
#endif

#if VMA_VULKAN_VERSION >= 1003000
    VMA_COPY_IF_NOT_NULL(vkGetDeviceBufferMemoryRequirements);
    VMA_COPY_IF_NOT_NULL(vkGetDeviceImageMemoryRequirements);
#endif

#undef VMA_COPY_IF_NOT_NULL
}

#if VMA_DYNAMIC_VULKAN_FUNCTIONS == 1

void VmaAllocator_T::ImportVulkanFunctions_Dynamic()
{
    VMA_ASSERT(m_VulkanFunctions.vkGetInstanceProcAddr && m_VulkanFunctions.vkGetDeviceProcAddr &&
        "To use VMA_DYNAMIC_VULKAN_FUNCTIONS in new versions of VMA you now have to pass "
        "VmaVulkanFunctions::vkGetInstanceProcAddr and vkGetDeviceProcAddr as VmaAllocatorCreateInfo::pVulkanFunctions. "
        "Other members can be null.");

#define VMA_FETCH_INSTANCE_FUNC(memberName, functionPointerType, functionNameString) \
    if(m_VulkanFunctions.memberName == VMA_NULL) \
        m_VulkanFunctions.memberName = \
            (functionPointerType)m_VulkanFunctions.vkGetInstanceProcAddr(m_hInstance, functionNameString);
#define VMA_FETCH_DEVICE_FUNC(memberName, functionPointerType, functionNameString) \
    if(m_VulkanFunctions.memberName == VMA_NULL) \
        m_VulkanFunctions.memberName = \
            (functionPointerType)m_VulkanFunctions.vkGetDeviceProcAddr(m_hDevice, functionNameString);

    VMA_FETCH_INSTANCE_FUNC(vkGetPhysicalDeviceProperties, PFN_vkGetPhysicalDeviceProperties, "vkGetPhysicalDeviceProperties");
    VMA_FETCH_INSTANCE_FUNC(vkGetPhysicalDeviceMemoryProperties, PFN_vkGetPhysicalDeviceMemoryProperties, "vkGetPhysicalDeviceMemoryProperties");
    VMA_FETCH_DEVICE_FUNC(vkAllocateMemory, PFN_vkAllocateMemory, "vkAllocateMemory");
    VMA_FETCH_DEVICE_FUNC(vkFreeMemory, PFN_vkFreeMemory, "vkFreeMemory");
    VMA_FETCH_DEVICE_FUNC(vkMapMemory, PFN_vkMapMemory, "vkMapMemory");
    VMA_FETCH_DEVICE_FUNC(vkUnmapMemory, PFN_vkUnmapMemory, "vkUnmapMemory");
    VMA_FETCH_DEVICE_FUNC(vkFlushMappedMemoryRanges, PFN_vkFlushMappedMemoryRanges, "vkFlushMappedMemoryRanges");
    VMA_FETCH_DEVICE_FUNC(vkInvalidateMappedMemoryRanges, PFN_vkInvalidateMappedMemoryRanges, "vkInvalidateMappedMemoryRanges");
    VMA_FETCH_DEVICE_FUNC(vkBindBufferMemory, PFN_vkBindBufferMemory, "vkBindBufferMemory");
    VMA_FETCH_DEVICE_FUNC(vkBindImageMemory, PFN_vkBindImageMemory, "vkBindImageMemory");
    VMA_FETCH_DEVICE_FUNC(vkGetBufferMemoryRequirements, PFN_vkGetBufferMemoryRequirements, "vkGetBufferMemoryRequirements");
    VMA_FETCH_DEVICE_FUNC(vkGetImageMemoryRequirements, PFN_vkGetImageMemoryRequirements, "vkGetImageMemoryRequirements");
    VMA_FETCH_DEVICE_FUNC(vkCreateBuffer, PFN_vkCreateBuffer, "vkCreateBuffer");
    VMA_FETCH_DEVICE_FUNC(vkDestroyBuffer, PFN_vkDestroyBuffer, "vkDestroyBuffer");
    VMA_FETCH_DEVICE_FUNC(vkCreateImage, PFN_vkCreateImage, "vkCreateImage");
    VMA_FETCH_DEVICE_FUNC(vkDestroyImage, PFN_vkDestroyImage, "vkDestroyImage");
    VMA_FETCH_DEVICE_FUNC(vkCmdCopyBuffer, PFN_vkCmdCopyBuffer, "vkCmdCopyBuffer");

#if VMA_VULKAN_VERSION >= 1001000
    if(m_VulkanApiVersion >= VK_MAKE_VERSION(1, 1, 0))
    {
        VMA_FETCH_DEVICE_FUNC(vkGetBufferMemoryRequirements2KHR, PFN_vkGetBufferMemoryRequirements2, "vkGetBufferMemoryRequirements2");
        VMA_FETCH_DEVICE_FUNC(vkGetImageMemoryRequirements2KHR, PFN_vkGetImageMemoryRequirements2, "vkGetImageMemoryRequirements2");
        VMA_FETCH_DEVICE_FUNC(vkBindBufferMemory2KHR, PFN_vkBindBufferMemory2, "vkBindBufferMemory2");
        VMA_FETCH_DEVICE_FUNC(vkBindImageMemory2KHR, PFN_vkBindImageMemory2, "vkBindImageMemory2");
        VMA_FETCH_INSTANCE_FUNC(vkGetPhysicalDeviceMemoryProperties2KHR, PFN_vkGetPhysicalDeviceMemoryProperties2, "vkGetPhysicalDeviceMemoryProperties2");
    }
#endif

#if VMA_DEDICATED_ALLOCATION
    if(m_UseKhrDedicatedAllocation)
    {
        VMA_FETCH_DEVICE_FUNC(vkGetBufferMemoryRequirements2KHR, PFN_vkGetBufferMemoryRequirements2KHR, "vkGetBufferMemoryRequirements2KHR");
        VMA_FETCH_DEVICE_FUNC(vkGetImageMemoryRequirements2KHR, PFN_vkGetImageMemoryRequirements2KHR, "vkGetImageMemoryRequirements2KHR");
    }
#endif

#if VMA_BIND_MEMORY2
    if(m_UseKhrBindMemory2)
    {
        VMA_FETCH_DEVICE_FUNC(vkBindBufferMemory2KHR, PFN_vkBindBufferMemory2KHR, "vkBindBufferMemory2KHR");
        VMA_FETCH_DEVICE_FUNC(vkBindImageMemory2KHR, PFN_vkBindImageMemory2KHR, "vkBindImageMemory2KHR");
    }
#endif // #if VMA_BIND_MEMORY2

#if VMA_MEMORY_BUDGET
    if(m_UseExtMemoryBudget)
    {
        VMA_FETCH_INSTANCE_FUNC(vkGetPhysicalDeviceMemoryProperties2KHR, PFN_vkGetPhysicalDeviceMemoryProperties2KHR, "vkGetPhysicalDeviceMemoryProperties2KHR");
    }
#endif // #if VMA_MEMORY_BUDGET

#if VMA_VULKAN_VERSION >= 1003000
    if(m_VulkanApiVersion >= VK_MAKE_VERSION(1, 3, 0))
    {
        VMA_FETCH_DEVICE_FUNC(vkGetDeviceBufferMemoryRequirements, PFN_vkGetDeviceBufferMemoryRequirements, "vkGetDeviceBufferMemoryRequirements");
        VMA_FETCH_DEVICE_FUNC(vkGetDeviceImageMemoryRequirements, PFN_vkGetDeviceImageMemoryRequirements, "vkGetDeviceImageMemoryRequirements");
    }
#endif

#undef VMA_FETCH_DEVICE_FUNC
#undef VMA_FETCH_INSTANCE_FUNC
}
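
// Note: for Vulkan >= 1.1 the promoted (non-KHR) entry points are fetched into the *KHR
// members above, while the extension paths fetch the KHR-suffixed names; the rest of the
// allocator always calls through the *KHR members either way.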

#endif // VMA_DYNAMIC_VULKAN_FUNCTIONS == 1

void VmaAllocator_T::ValidateVulkanFunctions()
{
    VMA_ASSERT(m_VulkanFunctions.vkGetPhysicalDeviceProperties != VMA_NULL);
    VMA_ASSERT(m_VulkanFunctions.vkGetPhysicalDeviceMemoryProperties != VMA_NULL);
    VMA_ASSERT(m_VulkanFunctions.vkAllocateMemory != VMA_NULL);
    VMA_ASSERT(m_VulkanFunctions.vkFreeMemory != VMA_NULL);
    VMA_ASSERT(m_VulkanFunctions.vkMapMemory != VMA_NULL);
    VMA_ASSERT(m_VulkanFunctions.vkUnmapMemory != VMA_NULL);
    VMA_ASSERT(m_VulkanFunctions.vkFlushMappedMemoryRanges != VMA_NULL);
    VMA_ASSERT(m_VulkanFunctions.vkInvalidateMappedMemoryRanges != VMA_NULL);
    VMA_ASSERT(m_VulkanFunctions.vkBindBufferMemory != VMA_NULL);
    VMA_ASSERT(m_VulkanFunctions.vkBindImageMemory != VMA_NULL);
    VMA_ASSERT(m_VulkanFunctions.vkGetBufferMemoryRequirements != VMA_NULL);
    VMA_ASSERT(m_VulkanFunctions.vkGetImageMemoryRequirements != VMA_NULL);
    VMA_ASSERT(m_VulkanFunctions.vkCreateBuffer != VMA_NULL);
    VMA_ASSERT(m_VulkanFunctions.vkDestroyBuffer != VMA_NULL);
    VMA_ASSERT(m_VulkanFunctions.vkCreateImage != VMA_NULL);
    VMA_ASSERT(m_VulkanFunctions.vkDestroyImage != VMA_NULL);
    VMA_ASSERT(m_VulkanFunctions.vkCmdCopyBuffer != VMA_NULL);

#if VMA_DEDICATED_ALLOCATION || VMA_VULKAN_VERSION >= 1001000
    if(m_VulkanApiVersion >= VK_MAKE_VERSION(1, 1, 0) || m_UseKhrDedicatedAllocation)
    {
        VMA_ASSERT(m_VulkanFunctions.vkGetBufferMemoryRequirements2KHR != VMA_NULL);
        VMA_ASSERT(m_VulkanFunctions.vkGetImageMemoryRequirements2KHR != VMA_NULL);
    }
#endif

#if VMA_BIND_MEMORY2 || VMA_VULKAN_VERSION >= 1001000
    if(m_VulkanApiVersion >= VK_MAKE_VERSION(1, 1, 0) || m_UseKhrBindMemory2)
    {
        VMA_ASSERT(m_VulkanFunctions.vkBindBufferMemory2KHR != VMA_NULL);
        VMA_ASSERT(m_VulkanFunctions.vkBindImageMemory2KHR != VMA_NULL);
    }
#endif

#if VMA_MEMORY_BUDGET || VMA_VULKAN_VERSION >= 1001000
    if(m_UseExtMemoryBudget || m_VulkanApiVersion >= VK_MAKE_VERSION(1, 1, 0))
    {
        VMA_ASSERT(m_VulkanFunctions.vkGetPhysicalDeviceMemoryProperties2KHR != VMA_NULL);
    }
#endif

#if VMA_VULKAN_VERSION >= 1003000
    if(m_VulkanApiVersion >= VK_MAKE_VERSION(1, 3, 0))
    {
        VMA_ASSERT(m_VulkanFunctions.vkGetDeviceBufferMemoryRequirements != VMA_NULL);
        VMA_ASSERT(m_VulkanFunctions.vkGetDeviceImageMemoryRequirements != VMA_NULL);
    }
#endif
}

VkDeviceSize VmaAllocator_T::CalcPreferredBlockSize(uint32_t memTypeIndex)
{
    const uint32_t heapIndex = MemoryTypeIndexToHeapIndex(memTypeIndex);
    const VkDeviceSize heapSize = m_MemProps.memoryHeaps[heapIndex].size;
    const bool isSmallHeap = heapSize <= VMA_SMALL_HEAP_MAX_SIZE;
    return VmaAlignUp(isSmallHeap ? (heapSize / 8) : m_PreferredLargeHeapBlockSize, (VkDeviceSize)32);
}
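
// Worked example, using the library defaults of VMA_SMALL_HEAP_MAX_SIZE = 1 GiB and
// VMA_DEFAULT_LARGE_HEAP_BLOCK_SIZE = 256 MiB: a 256 MiB heap counts as small, so its
// blocks are 256 MiB / 8 = 32 MiB; an 8 GiB heap uses the preferred 256 MiB block size.
// The result is rounded up to a multiple of 32 bytes.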
14453 
AllocateMemoryOfType(VmaPool pool,VkDeviceSize size,VkDeviceSize alignment,bool dedicatedPreferred,VkBuffer dedicatedBuffer,VkImage dedicatedImage,VkFlags dedicatedBufferImageUsage,const VmaAllocationCreateInfo & createInfo,uint32_t memTypeIndex,VmaSuballocationType suballocType,VmaDedicatedAllocationList & dedicatedAllocations,VmaBlockVector & blockVector,size_t allocationCount,VmaAllocation * pAllocations)14454 VkResult VmaAllocator_T::AllocateMemoryOfType(
14455     VmaPool pool,
14456     VkDeviceSize size,
14457     VkDeviceSize alignment,
14458     bool dedicatedPreferred,
14459     VkBuffer dedicatedBuffer,
14460     VkImage dedicatedImage,
14461     VkFlags dedicatedBufferImageUsage,
14462     const VmaAllocationCreateInfo& createInfo,
14463     uint32_t memTypeIndex,
14464     VmaSuballocationType suballocType,
14465     VmaDedicatedAllocationList& dedicatedAllocations,
14466     VmaBlockVector& blockVector,
14467     size_t allocationCount,
14468     VmaAllocation* pAllocations)
14469 {
14470     VMA_ASSERT(pAllocations != VMA_NULL);
14471     VMA_DEBUG_LOG("  AllocateMemory: MemoryTypeIndex=%u, AllocationCount=%zu, Size=%llu", memTypeIndex, allocationCount, size);
14472 
14473     VmaAllocationCreateInfo finalCreateInfo = createInfo;
14474     VkResult res = CalcMemTypeParams(
14475         finalCreateInfo,
14476         memTypeIndex,
14477         size,
14478         allocationCount);
14479     if(res != VK_SUCCESS)
14480         return res;
14481 
14482     if((finalCreateInfo.flags & VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT) != 0)
14483     {
14484         return AllocateDedicatedMemory(
14485             pool,
14486             size,
14487             suballocType,
14488             dedicatedAllocations,
14489             memTypeIndex,
14490             (finalCreateInfo.flags & VMA_ALLOCATION_CREATE_MAPPED_BIT) != 0,
14491             (finalCreateInfo.flags & VMA_ALLOCATION_CREATE_USER_DATA_COPY_STRING_BIT) != 0,
14492             (finalCreateInfo.flags &
14493                 (VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT | VMA_ALLOCATION_CREATE_HOST_ACCESS_RANDOM_BIT)) != 0,
14494             (finalCreateInfo.flags & VMA_ALLOCATION_CREATE_CAN_ALIAS_BIT) != 0,
14495             finalCreateInfo.pUserData,
14496             finalCreateInfo.priority,
14497             dedicatedBuffer,
14498             dedicatedImage,
14499             dedicatedBufferImageUsage,
14500             allocationCount,
14501             pAllocations,
14502             blockVector.GetAllocationNextPtr());
14503     }
14504     else
14505     {
14506         const bool canAllocateDedicated =
14507             (finalCreateInfo.flags & VMA_ALLOCATION_CREATE_NEVER_ALLOCATE_BIT) == 0 &&
14508             (pool == VK_NULL_HANDLE || !blockVector.HasExplicitBlockSize());
14509 
14510         if(canAllocateDedicated)
14511         {
14512             // Heuristics: Allocate dedicated memory if requested size if greater than half of preferred block size.
14513             if(size > blockVector.GetPreferredBlockSize() / 2)
14514             {
14515                 dedicatedPreferred = true;
14516             }
14517             // Protection against creating each allocation as dedicated when we reach or exceed heap size/budget,
14518             // which can quickly deplete maxMemoryAllocationCount: Don't prefer dedicated allocations when above
14519             // 3/4 of the maximum allocation count.
14520             if(m_DeviceMemoryCount.load() > m_PhysicalDeviceProperties.limits.maxMemoryAllocationCount * 3 / 4)
14521             {
14522                 dedicatedPreferred = false;
14523             }
14524 
14525             if(dedicatedPreferred)
14526             {
14527                 res = AllocateDedicatedMemory(
14528                     pool,
14529                     size,
14530                     suballocType,
14531                     dedicatedAllocations,
14532                     memTypeIndex,
14533                     (finalCreateInfo.flags & VMA_ALLOCATION_CREATE_MAPPED_BIT) != 0,
14534                     (finalCreateInfo.flags & VMA_ALLOCATION_CREATE_USER_DATA_COPY_STRING_BIT) != 0,
14535                     (finalCreateInfo.flags &
14536                         (VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT | VMA_ALLOCATION_CREATE_HOST_ACCESS_RANDOM_BIT)) != 0,
14537                     (finalCreateInfo.flags & VMA_ALLOCATION_CREATE_CAN_ALIAS_BIT) != 0,
14538                     finalCreateInfo.pUserData,
14539                     finalCreateInfo.priority,
14540                     dedicatedBuffer,
14541                     dedicatedImage,
14542                     dedicatedBufferImageUsage,
14543                     allocationCount,
14544                     pAllocations,
14545                     blockVector.GetAllocationNextPtr());
14546                 if(res == VK_SUCCESS)
14547                 {
14548                     // Succeeded: AllocateDedicatedMemory function already filld pMemory, nothing more to do here.
14549                     VMA_DEBUG_LOG("    Allocated as DedicatedMemory");
14550                     return VK_SUCCESS;
14551                 }
14552             }
14553         }
14554 
14555         res = blockVector.Allocate(
14556             size,
14557             alignment,
14558             finalCreateInfo,
14559             suballocType,
14560             allocationCount,
14561             pAllocations);
14562         if(res == VK_SUCCESS)
14563             return VK_SUCCESS;
14564 
14565         // Try dedicated memory.
14566         if(canAllocateDedicated && !dedicatedPreferred)
14567         {
14568             res = AllocateDedicatedMemory(
14569                 pool,
14570                 size,
14571                 suballocType,
14572                 dedicatedAllocations,
14573                 memTypeIndex,
14574                 (finalCreateInfo.flags & VMA_ALLOCATION_CREATE_MAPPED_BIT) != 0,
14575                 (finalCreateInfo.flags & VMA_ALLOCATION_CREATE_USER_DATA_COPY_STRING_BIT) != 0,
14576                 (finalCreateInfo.flags &
14577                     (VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT | VMA_ALLOCATION_CREATE_HOST_ACCESS_RANDOM_BIT)) != 0,
14578                 (finalCreateInfo.flags & VMA_ALLOCATION_CREATE_CAN_ALIAS_BIT) != 0,
14579                 finalCreateInfo.pUserData,
14580                 finalCreateInfo.priority,
14581                 dedicatedBuffer,
14582                 dedicatedImage,
14583                 dedicatedBufferImageUsage,
14584                 allocationCount,
14585                 pAllocations,
14586                 blockVector.GetAllocationNextPtr());
14587             if(res == VK_SUCCESS)
14588             {
14589                 // Succeeded: AllocateDedicatedMemory function already filled pAllocations, nothing more to do here.
14590                 VMA_DEBUG_LOG("    Allocated as DedicatedMemory");
14591                 return VK_SUCCESS;
14592             }
14593         }
14594         // Everything failed: Return error code.
14595         VMA_DEBUG_LOG("    vkAllocateMemory FAILED");
14596         return res;
14597     }
14598 }
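/*
Illustrative sketch (not part of the library): the dedicated-memory heuristic in
AllocateMemoryOfType above, with hypothetical numbers. Assume a preferred block
size of 256 MiB and maxMemoryAllocationCount = 4096:

    const VkDeviceSize preferredBlockSize = 256ull << 20; // 256 MiB (assumed)
    const uint32_t maxMemoryAllocationCount = 4096;       // typical driver limit (assumed)

    // A 192 MiB request exceeds half the block size, so dedicated memory is preferred...
    bool dedicatedPreferred = (192ull << 20) > preferredBlockSize / 2; // true

    // ...unless the process already holds more than 3/4 of the allowed number of
    // VkDeviceMemory objects; then suballocation from blocks is preferred again.
    const uint32_t liveDeviceMemoryCount = 3500; // assumed current count
    if(liveDeviceMemoryCount > maxMemoryAllocationCount * 3 / 4) // 3500 > 3072
        dedicatedPreferred = false;
*/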
14599 
14600 VkResult VmaAllocator_T::AllocateDedicatedMemory(
14601     VmaPool pool,
14602     VkDeviceSize size,
14603     VmaSuballocationType suballocType,
14604     VmaDedicatedAllocationList& dedicatedAllocations,
14605     uint32_t memTypeIndex,
14606     bool map,
14607     bool isUserDataString,
14608     bool isMappingAllowed,
14609     bool canAliasMemory,
14610     void* pUserData,
14611     float priority,
14612     VkBuffer dedicatedBuffer,
14613     VkImage dedicatedImage,
14614     VkFlags dedicatedBufferImageUsage,
14615     size_t allocationCount,
14616     VmaAllocation* pAllocations,
14617     const void* pNextChain)
14618 {
14619     VMA_ASSERT(allocationCount > 0 && pAllocations);
14620 
14621     VkMemoryAllocateInfo allocInfo = { VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_INFO };
14622     allocInfo.memoryTypeIndex = memTypeIndex;
14623     allocInfo.allocationSize = size;
14624     allocInfo.pNext = pNextChain;
14625 
14626 #if VMA_DEDICATED_ALLOCATION || VMA_VULKAN_VERSION >= 1001000
14627     VkMemoryDedicatedAllocateInfoKHR dedicatedAllocInfo = { VK_STRUCTURE_TYPE_MEMORY_DEDICATED_ALLOCATE_INFO_KHR };
14628     if(!canAliasMemory)
14629     {
14630         if(m_UseKhrDedicatedAllocation || m_VulkanApiVersion >= VK_MAKE_VERSION(1, 1, 0))
14631         {
14632             if(dedicatedBuffer != VK_NULL_HANDLE)
14633             {
14634                 VMA_ASSERT(dedicatedImage == VK_NULL_HANDLE);
14635                 dedicatedAllocInfo.buffer = dedicatedBuffer;
14636                 VmaPnextChainPushFront(&allocInfo, &dedicatedAllocInfo);
14637             }
14638             else if(dedicatedImage != VK_NULL_HANDLE)
14639             {
14640                 dedicatedAllocInfo.image = dedicatedImage;
14641                 VmaPnextChainPushFront(&allocInfo, &dedicatedAllocInfo);
14642             }
14643         }
14644     }
14645 #endif // #if VMA_DEDICATED_ALLOCATION || VMA_VULKAN_VERSION >= 1001000
14646 
14647 #if VMA_BUFFER_DEVICE_ADDRESS
14648     VkMemoryAllocateFlagsInfoKHR allocFlagsInfo = { VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_FLAGS_INFO_KHR };
14649     if(m_UseKhrBufferDeviceAddress)
14650     {
14651         bool canContainBufferWithDeviceAddress = true;
14652         if(dedicatedBuffer != VK_NULL_HANDLE)
14653         {
14654             canContainBufferWithDeviceAddress = dedicatedBufferImageUsage == UINT32_MAX || // Usage flags unknown
14655                 (dedicatedBufferImageUsage & VK_BUFFER_USAGE_SHADER_DEVICE_ADDRESS_BIT_EXT) != 0;
14656         }
14657         else if(dedicatedImage != VK_NULL_HANDLE)
14658         {
14659             canContainBufferWithDeviceAddress = false;
14660         }
14661         if(canContainBufferWithDeviceAddress)
14662         {
14663             allocFlagsInfo.flags = VK_MEMORY_ALLOCATE_DEVICE_ADDRESS_BIT_KHR;
14664             VmaPnextChainPushFront(&allocInfo, &allocFlagsInfo);
14665         }
14666     }
14667 #endif // #if VMA_BUFFER_DEVICE_ADDRESS
14668 
14669 #if VMA_MEMORY_PRIORITY
14670     VkMemoryPriorityAllocateInfoEXT priorityInfo = { VK_STRUCTURE_TYPE_MEMORY_PRIORITY_ALLOCATE_INFO_EXT };
14671     if(m_UseExtMemoryPriority)
14672     {
14673         VMA_ASSERT(priority >= 0.f && priority <= 1.f);
14674         priorityInfo.priority = priority;
14675         VmaPnextChainPushFront(&allocInfo, &priorityInfo);
14676     }
14677 #endif // #if VMA_MEMORY_PRIORITY
14678 
14679 #if VMA_EXTERNAL_MEMORY
14680     // Attach VkExportMemoryAllocateInfoKHR if necessary.
14681     VkExportMemoryAllocateInfoKHR exportMemoryAllocInfo = { VK_STRUCTURE_TYPE_EXPORT_MEMORY_ALLOCATE_INFO_KHR };
14682     exportMemoryAllocInfo.handleTypes = GetExternalMemoryHandleTypeFlags(memTypeIndex);
14683     if(exportMemoryAllocInfo.handleTypes != 0)
14684     {
14685         VmaPnextChainPushFront(&allocInfo, &exportMemoryAllocInfo);
14686     }
14687 #endif // #if VMA_EXTERNAL_MEMORY
14688 
14689     size_t allocIndex;
14690     VkResult res = VK_SUCCESS;
14691     for(allocIndex = 0; allocIndex < allocationCount; ++allocIndex)
14692     {
14693         res = AllocateDedicatedMemoryPage(
14694             pool,
14695             size,
14696             suballocType,
14697             memTypeIndex,
14698             allocInfo,
14699             map,
14700             isUserDataString,
14701             isMappingAllowed,
14702             pUserData,
14703             pAllocations + allocIndex);
14704         if(res != VK_SUCCESS)
14705         {
14706             break;
14707         }
14708     }
14709 
14710     if(res == VK_SUCCESS)
14711     {
14712         for (allocIndex = 0; allocIndex < allocationCount; ++allocIndex)
14713         {
14714             dedicatedAllocations.Register(pAllocations[allocIndex]);
14715         }
14716         VMA_DEBUG_LOG("    Allocated DedicatedMemory Count=%zu, MemoryTypeIndex=#%u", allocationCount, memTypeIndex);
14717     }
14718     else
14719     {
14720         // Free all already created allocations.
14721         while(allocIndex--)
14722         {
14723             VmaAllocation currAlloc = pAllocations[allocIndex];
14724             VkDeviceMemory hMemory = currAlloc->GetMemory();
14725 
14726             /*
14727             There is no need to call this, because the Vulkan spec allows skipping
14728             vkUnmapMemory before vkFreeMemory.
14729 
14730             if(currAlloc->GetMappedData() != VMA_NULL)
14731             {
14732                 (*m_VulkanFunctions.vkUnmapMemory)(m_hDevice, hMemory);
14733             }
14734             */
14735 
14736             FreeVulkanMemory(memTypeIndex, currAlloc->GetSize(), hMemory);
14737             m_Budget.RemoveAllocation(MemoryTypeIndexToHeapIndex(memTypeIndex), currAlloc->GetSize());
14738             m_AllocationObjectAllocator.Free(currAlloc);
14739         }
14740 
14741         memset(pAllocations, 0, sizeof(VmaAllocation) * allocationCount);
14742     }
14743 
14744     return res;
14745 }
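/*
Illustrative sketch (not part of the library): how the pNext chain of
VkMemoryAllocateInfo is composed by AllocateDedicatedMemory above. Each
VmaPnextChainPushFront inserts a struct directly after allocInfo, so with all
features enabled the final chain reads (most recently pushed first):

    VkMemoryAllocateInfo
      -> VkExportMemoryAllocateInfoKHR
      -> VkMemoryPriorityAllocateInfoEXT
      -> VkMemoryAllocateFlagsInfoKHR
      -> VkMemoryDedicatedAllocateInfoKHR
      -> caller-provided pNextChain

A minimal hand-written push-front for one struct (myBuffer is hypothetical):

    VkMemoryDedicatedAllocateInfo dedicatedInfo = { VK_STRUCTURE_TYPE_MEMORY_DEDICATED_ALLOCATE_INFO };
    dedicatedInfo.buffer = myBuffer;
    dedicatedInfo.pNext = allocInfo.pNext; // point at the old head...
    allocInfo.pNext = &dedicatedInfo;      // ...and become the new head
*/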
14746 
14747 VkResult VmaAllocator_T::AllocateDedicatedMemoryPage(
14748     VmaPool pool,
14749     VkDeviceSize size,
14750     VmaSuballocationType suballocType,
14751     uint32_t memTypeIndex,
14752     const VkMemoryAllocateInfo& allocInfo,
14753     bool map,
14754     bool isUserDataString,
14755     bool isMappingAllowed,
14756     void* pUserData,
14757     VmaAllocation* pAllocation)
14758 {
14759     VkDeviceMemory hMemory = VK_NULL_HANDLE;
14760     VkResult res = AllocateVulkanMemory(&allocInfo, &hMemory);
14761     if(res < 0)
14762     {
14763         VMA_DEBUG_LOG("    vkAllocateMemory FAILED");
14764         return res;
14765     }
14766 
14767     void* pMappedData = VMA_NULL;
14768     if(map)
14769     {
14770         res = (*m_VulkanFunctions.vkMapMemory)(
14771             m_hDevice,
14772             hMemory,
14773             0,
14774             VK_WHOLE_SIZE,
14775             0,
14776             &pMappedData);
14777         if(res < 0)
14778         {
14779             VMA_DEBUG_LOG("    vkMapMemory FAILED");
14780             FreeVulkanMemory(memTypeIndex, size, hMemory);
14781             return res;
14782         }
14783     }
14784 
14785     *pAllocation = m_AllocationObjectAllocator.Allocate(isMappingAllowed);
14786     (*pAllocation)->InitDedicatedAllocation(pool, memTypeIndex, hMemory, suballocType, pMappedData, size);
14787     if (isUserDataString)
14788         (*pAllocation)->SetName(this, (const char*)pUserData);
14789     else
14790         (*pAllocation)->SetUserData(this, pUserData);
14791     m_Budget.AddAllocation(MemoryTypeIndexToHeapIndex(memTypeIndex), size);
14792     if(VMA_DEBUG_INITIALIZE_ALLOCATIONS)
14793     {
14794         FillAllocation(*pAllocation, VMA_ALLOCATION_FILL_PATTERN_CREATED);
14795     }
14796 
14797     return VK_SUCCESS;
14798 }
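/*
Usage sketch (application side): requesting a persistently mapped dedicated
allocation through the public API ends up in AllocateDedicatedMemoryPage above,
which maps the whole range with vkMapMemory(..., 0, VK_WHOLE_SIZE, ...).
Buffer size and usage below are arbitrary example values:

    VkBufferCreateInfo bufCreateInfo = { VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO };
    bufCreateInfo.size = 65536;
    bufCreateInfo.usage = VK_BUFFER_USAGE_TRANSFER_SRC_BIT;

    VmaAllocationCreateInfo allocCreateInfo = {};
    allocCreateInfo.usage = VMA_MEMORY_USAGE_AUTO;
    allocCreateInfo.flags = VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT |
        VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT |
        VMA_ALLOCATION_CREATE_MAPPED_BIT;

    VkBuffer buf; VmaAllocation alloc; VmaAllocationInfo allocInfo;
    vmaCreateBuffer(allocator, &bufCreateInfo, &allocCreateInfo, &buf, &alloc, &allocInfo);
    // allocInfo.pMappedData remains valid for the allocation's whole lifetime.
*/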
14799 
14800 void VmaAllocator_T::GetBufferMemoryRequirements(
14801     VkBuffer hBuffer,
14802     VkMemoryRequirements& memReq,
14803     bool& requiresDedicatedAllocation,
14804     bool& prefersDedicatedAllocation) const
14805 {
14806 #if VMA_DEDICATED_ALLOCATION || VMA_VULKAN_VERSION >= 1001000
14807     if(m_UseKhrDedicatedAllocation || m_VulkanApiVersion >= VK_MAKE_VERSION(1, 1, 0))
14808     {
14809         VkBufferMemoryRequirementsInfo2KHR memReqInfo = { VK_STRUCTURE_TYPE_BUFFER_MEMORY_REQUIREMENTS_INFO_2_KHR };
14810         memReqInfo.buffer = hBuffer;
14811 
14812         VkMemoryDedicatedRequirementsKHR memDedicatedReq = { VK_STRUCTURE_TYPE_MEMORY_DEDICATED_REQUIREMENTS_KHR };
14813 
14814         VkMemoryRequirements2KHR memReq2 = { VK_STRUCTURE_TYPE_MEMORY_REQUIREMENTS_2_KHR };
14815         VmaPnextChainPushFront(&memReq2, &memDedicatedReq);
14816 
14817         (*m_VulkanFunctions.vkGetBufferMemoryRequirements2KHR)(m_hDevice, &memReqInfo, &memReq2);
14818 
14819         memReq = memReq2.memoryRequirements;
14820         requiresDedicatedAllocation = (memDedicatedReq.requiresDedicatedAllocation != VK_FALSE);
14821         prefersDedicatedAllocation  = (memDedicatedReq.prefersDedicatedAllocation  != VK_FALSE);
14822     }
14823     else
14824 #endif // #if VMA_DEDICATED_ALLOCATION || VMA_VULKAN_VERSION >= 1001000
14825     {
14826         (*m_VulkanFunctions.vkGetBufferMemoryRequirements)(m_hDevice, hBuffer, &memReq);
14827         requiresDedicatedAllocation = false;
14828         prefersDedicatedAllocation  = false;
14829     }
14830 }
14831 
14832 void VmaAllocator_T::GetImageMemoryRequirements(
14833     VkImage hImage,
14834     VkMemoryRequirements& memReq,
14835     bool& requiresDedicatedAllocation,
14836     bool& prefersDedicatedAllocation) const
14837 {
14838 #if VMA_DEDICATED_ALLOCATION || VMA_VULKAN_VERSION >= 1001000
14839     if(m_UseKhrDedicatedAllocation || m_VulkanApiVersion >= VK_MAKE_VERSION(1, 1, 0))
14840     {
14841         VkImageMemoryRequirementsInfo2KHR memReqInfo = { VK_STRUCTURE_TYPE_IMAGE_MEMORY_REQUIREMENTS_INFO_2_KHR };
14842         memReqInfo.image = hImage;
14843 
14844         VkMemoryDedicatedRequirementsKHR memDedicatedReq = { VK_STRUCTURE_TYPE_MEMORY_DEDICATED_REQUIREMENTS_KHR };
14845 
14846         VkMemoryRequirements2KHR memReq2 = { VK_STRUCTURE_TYPE_MEMORY_REQUIREMENTS_2_KHR };
14847         VmaPnextChainPushFront(&memReq2, &memDedicatedReq);
14848 
14849         (*m_VulkanFunctions.vkGetImageMemoryRequirements2KHR)(m_hDevice, &memReqInfo, &memReq2);
14850 
14851         memReq = memReq2.memoryRequirements;
14852         requiresDedicatedAllocation = (memDedicatedReq.requiresDedicatedAllocation != VK_FALSE);
14853         prefersDedicatedAllocation  = (memDedicatedReq.prefersDedicatedAllocation  != VK_FALSE);
14854     }
14855     else
14856 #endif // #if VMA_DEDICATED_ALLOCATION || VMA_VULKAN_VERSION >= 1001000
14857     {
14858         (*m_VulkanFunctions.vkGetImageMemoryRequirements)(m_hDevice, hImage, &memReq);
14859         requiresDedicatedAllocation = false;
14860         prefersDedicatedAllocation  = false;
14861     }
14862 }
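/*
Illustrative sketch: the equivalent query an application could issue directly
with core Vulkan 1.1, mirroring GetImageMemoryRequirements above (myImage is
hypothetical):

    VkImageMemoryRequirementsInfo2 info = { VK_STRUCTURE_TYPE_IMAGE_MEMORY_REQUIREMENTS_INFO_2 };
    info.image = myImage;

    VkMemoryDedicatedRequirements dedicatedReq = { VK_STRUCTURE_TYPE_MEMORY_DEDICATED_REQUIREMENTS };
    VkMemoryRequirements2 memReq2 = { VK_STRUCTURE_TYPE_MEMORY_REQUIREMENTS_2 };
    memReq2.pNext = &dedicatedReq;

    vkGetImageMemoryRequirements2(device, &info, &memReq2);
    // dedicatedReq.requiresDedicatedAllocation / prefersDedicatedAllocation map to the
    // requiresDedicatedAllocation / prefersDedicatedAllocation outputs above.
*/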
14863 
14864 VkResult VmaAllocator_T::FindMemoryTypeIndex(
14865     uint32_t memoryTypeBits,
14866     const VmaAllocationCreateInfo* pAllocationCreateInfo,
14867     VkFlags bufImgUsage,
14868     uint32_t* pMemoryTypeIndex) const
14869 {
14870     memoryTypeBits &= GetGlobalMemoryTypeBits();
14871 
14872     if(pAllocationCreateInfo->memoryTypeBits != 0)
14873     {
14874         memoryTypeBits &= pAllocationCreateInfo->memoryTypeBits;
14875     }
14876 
14877     VkMemoryPropertyFlags requiredFlags = 0, preferredFlags = 0, notPreferredFlags = 0;
14878     if(!FindMemoryPreferences(
14879         IsIntegratedGpu(),
14880         *pAllocationCreateInfo,
14881         bufImgUsage,
14882         requiredFlags, preferredFlags, notPreferredFlags))
14883     {
14884         return VK_ERROR_FEATURE_NOT_PRESENT;
14885     }
14886 
14887     *pMemoryTypeIndex = UINT32_MAX;
14888     uint32_t minCost = UINT32_MAX;
14889     for(uint32_t memTypeIndex = 0, memTypeBit = 1;
14890         memTypeIndex < GetMemoryTypeCount();
14891         ++memTypeIndex, memTypeBit <<= 1)
14892     {
14893         // This memory type is acceptable according to memoryTypeBits bitmask.
14894         if((memTypeBit & memoryTypeBits) != 0)
14895         {
14896             const VkMemoryPropertyFlags currFlags =
14897                 m_MemProps.memoryTypes[memTypeIndex].propertyFlags;
14898             // This memory type contains requiredFlags.
14899             if((requiredFlags & ~currFlags) == 0)
14900             {
14901                 // Calculate cost as number of bits from preferredFlags not present in this memory type.
14902                 uint32_t currCost = VMA_COUNT_BITS_SET(preferredFlags & ~currFlags) +
14903                     VMA_COUNT_BITS_SET(currFlags & notPreferredFlags);
14904                 // Remember memory type with lowest cost.
14905                 if(currCost < minCost)
14906                 {
14907                     *pMemoryTypeIndex = memTypeIndex;
14908                     if(currCost == 0)
14909                     {
14910                         return VK_SUCCESS;
14911                     }
14912                     minCost = currCost;
14913                 }
14914             }
14915         }
14916     }
14917     return (*pMemoryTypeIndex != UINT32_MAX) ? VK_SUCCESS : VK_ERROR_FEATURE_NOT_PRESENT;
14918 }
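/*
Worked example (hypothetical flags) of the cost metric used by
FindMemoryTypeIndex above. Cost = preferred bits missing from the memory type
plus not-preferred bits present in it; the lowest cost wins, and 0 wins immediately:

    VkMemoryPropertyFlags requiredFlags     = VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT;
    VkMemoryPropertyFlags preferredFlags    = VK_MEMORY_PROPERTY_HOST_COHERENT_BIT;
    VkMemoryPropertyFlags notPreferredFlags = VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT;

    // Candidate type: HOST_VISIBLE | DEVICE_LOCAL.
    VkMemoryPropertyFlags currFlags =
        VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT | VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT;

    bool acceptable = (requiredFlags & ~currFlags) == 0; // true: all required bits present
    uint32_t cost = VMA_COUNT_BITS_SET(preferredFlags & ~currFlags)    // 1: HOST_COHERENT missing
                  + VMA_COUNT_BITS_SET(currFlags & notPreferredFlags); // 1: DEVICE_LOCAL present
    // cost == 2; a HOST_VISIBLE | HOST_COHERENT type would cost 0 and be returned at once.
*/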
14919 
14920 VkResult VmaAllocator_T::CalcMemTypeParams(
14921     VmaAllocationCreateInfo& inoutCreateInfo,
14922     uint32_t memTypeIndex,
14923     VkDeviceSize size,
14924     size_t allocationCount)
14925 {
14926     // If memory type is not HOST_VISIBLE, disable MAPPED.
14927     if((inoutCreateInfo.flags & VMA_ALLOCATION_CREATE_MAPPED_BIT) != 0 &&
14928         (m_MemProps.memoryTypes[memTypeIndex].propertyFlags & VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT) == 0)
14929     {
14930         inoutCreateInfo.flags &= ~VMA_ALLOCATION_CREATE_MAPPED_BIT;
14931     }
14932 
14933     if((inoutCreateInfo.flags & VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT) != 0 &&
14934         (inoutCreateInfo.flags & VMA_ALLOCATION_CREATE_WITHIN_BUDGET_BIT) != 0)
14935     {
14936         const uint32_t heapIndex = MemoryTypeIndexToHeapIndex(memTypeIndex);
14937         VmaBudget heapBudget = {};
14938         GetHeapBudgets(&heapBudget, heapIndex, 1);
14939         if(heapBudget.usage + size * allocationCount > heapBudget.budget)
14940         {
14941             return VK_ERROR_OUT_OF_DEVICE_MEMORY;
14942         }
14943     }
14944     return VK_SUCCESS;
14945 }
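/*
Worked example (hypothetical numbers) for the WITHIN_BUDGET check above:

    heapBudget.usage  = 3ull << 30;             // 3.0 GiB currently in use on this heap
    heapBudget.budget = 4ull << 30;             // 4.0 GiB budget reported for it
    size = 300ull << 20; allocationCount = 4;   // request: 4 x 300 MiB dedicated allocations

    // 3072 MiB + 1200 MiB = 4272 MiB > 4096 MiB, so the function returns
    // VK_ERROR_OUT_OF_DEVICE_MEMORY before vkAllocateMemory is ever called.
*/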
14946 
14947 VkResult VmaAllocator_T::CalcAllocationParams(
14948     VmaAllocationCreateInfo& inoutCreateInfo,
14949     bool dedicatedRequired,
14950     bool dedicatedPreferred)
14951 {
14952     VMA_ASSERT((inoutCreateInfo.flags &
14953         (VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT | VMA_ALLOCATION_CREATE_HOST_ACCESS_RANDOM_BIT)) !=
14954         (VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT | VMA_ALLOCATION_CREATE_HOST_ACCESS_RANDOM_BIT) &&
14955         "Specifying both flags VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT and VMA_ALLOCATION_CREATE_HOST_ACCESS_RANDOM_BIT is incorrect.");
14956     VMA_ASSERT((((inoutCreateInfo.flags & VMA_ALLOCATION_CREATE_HOST_ACCESS_ALLOW_TRANSFER_INSTEAD_BIT) == 0 ||
14957         (inoutCreateInfo.flags & (VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT | VMA_ALLOCATION_CREATE_HOST_ACCESS_RANDOM_BIT)) != 0)) &&
14958         "Specifying VMA_ALLOCATION_CREATE_HOST_ACCESS_ALLOW_TRANSFER_INSTEAD_BIT requires also VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT or VMA_ALLOCATION_CREATE_HOST_ACCESS_RANDOM_BIT.");
14959     if(inoutCreateInfo.usage == VMA_MEMORY_USAGE_AUTO || inoutCreateInfo.usage == VMA_MEMORY_USAGE_AUTO_PREFER_DEVICE || inoutCreateInfo.usage == VMA_MEMORY_USAGE_AUTO_PREFER_HOST)
14960     {
14961         if((inoutCreateInfo.flags & VMA_ALLOCATION_CREATE_MAPPED_BIT) != 0)
14962         {
14963             VMA_ASSERT((inoutCreateInfo.flags & (VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT | VMA_ALLOCATION_CREATE_HOST_ACCESS_RANDOM_BIT)) != 0 &&
14964                 "When using VMA_ALLOCATION_CREATE_MAPPED_BIT and usage = VMA_MEMORY_USAGE_AUTO*, you must also specify VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT or VMA_ALLOCATION_CREATE_HOST_ACCESS_RANDOM_BIT.");
14965         }
14966     }
14967 
14968     // If memory is lazily allocated, it should always be dedicated.
14969     if(dedicatedRequired ||
14970         inoutCreateInfo.usage == VMA_MEMORY_USAGE_GPU_LAZILY_ALLOCATED)
14971     {
14972         inoutCreateInfo.flags |= VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT;
14973     }
14974 
14975     if(inoutCreateInfo.pool != VK_NULL_HANDLE)
14976     {
14977         if(inoutCreateInfo.pool->m_BlockVector.HasExplicitBlockSize() &&
14978             (inoutCreateInfo.flags & VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT) != 0)
14979         {
14980             VMA_ASSERT(0 && "Specifying VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT while current custom pool doesn't support dedicated allocations.");
14981             return VK_ERROR_FEATURE_NOT_PRESENT;
14982         }
14983         inoutCreateInfo.priority = inoutCreateInfo.pool->m_BlockVector.GetPriority();
14984     }
14985 
14986     if((inoutCreateInfo.flags & VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT) != 0 &&
14987         (inoutCreateInfo.flags & VMA_ALLOCATION_CREATE_NEVER_ALLOCATE_BIT) != 0)
14988     {
14989         VMA_ASSERT(0 && "Specifying VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT together with VMA_ALLOCATION_CREATE_NEVER_ALLOCATE_BIT makes no sense.");
14990         return VK_ERROR_FEATURE_NOT_PRESENT;
14991     }
14992 
14993     if(VMA_DEBUG_ALWAYS_DEDICATED_MEMORY &&
14994         (inoutCreateInfo.flags & VMA_ALLOCATION_CREATE_NEVER_ALLOCATE_BIT) != 0)
14995     {
14996         inoutCreateInfo.flags |= VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT;
14997     }
14998 
14999     // Non-auto USAGE values imply HOST_ACCESS flags, and so does VMA_MEMORY_USAGE_UNKNOWN,
15000     // because it is used with custom pools.
15001     // Which specific flag is used doesn't matter. They change behavior only when used with VMA_MEMORY_USAGE_AUTO*.
15002     // Otherwise they merely prevent an assert when the memory is mapped.
15003     if(inoutCreateInfo.usage != VMA_MEMORY_USAGE_AUTO &&
15004         inoutCreateInfo.usage != VMA_MEMORY_USAGE_AUTO_PREFER_DEVICE &&
15005         inoutCreateInfo.usage != VMA_MEMORY_USAGE_AUTO_PREFER_HOST)
15006     {
15007         if((inoutCreateInfo.flags & (VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT | VMA_ALLOCATION_CREATE_HOST_ACCESS_RANDOM_BIT)) == 0)
15008         {
15009             inoutCreateInfo.flags |= VMA_ALLOCATION_CREATE_HOST_ACCESS_RANDOM_BIT;
15010         }
15011     }
15012 
15013     return VK_SUCCESS;
15014 }
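/*
Usage sketch: flag combinations as seen by the validation in
CalcAllocationParams above (based on the library's documented rules):

    VmaAllocationCreateInfo ci = {};
    ci.usage = VMA_MEMORY_USAGE_AUTO;
    // OK: exactly one HOST_ACCESS flag; required here because MAPPED is combined with AUTO.
    ci.flags = VMA_ALLOCATION_CREATE_HOST_ACCESS_RANDOM_BIT | VMA_ALLOCATION_CREATE_MAPPED_BIT;

    // Would assert: both HOST_ACCESS flags at once.
    // ci.flags = VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT |
    //            VMA_ALLOCATION_CREATE_HOST_ACCESS_RANDOM_BIT;

    // Would return VK_ERROR_FEATURE_NOT_PRESENT: dedicated + never-allocate contradict.
    // ci.flags = VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT |
    //            VMA_ALLOCATION_CREATE_NEVER_ALLOCATE_BIT;
*/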
15015 
15016 VkResult VmaAllocator_T::AllocateMemory(
15017     const VkMemoryRequirements& vkMemReq,
15018     bool requiresDedicatedAllocation,
15019     bool prefersDedicatedAllocation,
15020     VkBuffer dedicatedBuffer,
15021     VkImage dedicatedImage,
15022     VkFlags dedicatedBufferImageUsage,
15023     const VmaAllocationCreateInfo& createInfo,
15024     VmaSuballocationType suballocType,
15025     size_t allocationCount,
15026     VmaAllocation* pAllocations)
15027 {
15028     memset(pAllocations, 0, sizeof(VmaAllocation) * allocationCount);
15029 
15030     VMA_ASSERT(VmaIsPow2(vkMemReq.alignment));
15031 
15032     if(vkMemReq.size == 0)
15033     {
15034         return VK_ERROR_INITIALIZATION_FAILED;
15035     }
15036 
15037     VmaAllocationCreateInfo createInfoFinal = createInfo;
15038     VkResult res = CalcAllocationParams(createInfoFinal, requiresDedicatedAllocation, prefersDedicatedAllocation);
15039     if(res != VK_SUCCESS)
15040         return res;
15041 
15042     if(createInfoFinal.pool != VK_NULL_HANDLE)
15043     {
15044         VmaBlockVector& blockVector = createInfoFinal.pool->m_BlockVector;
15045         return AllocateMemoryOfType(
15046             createInfoFinal.pool,
15047             vkMemReq.size,
15048             vkMemReq.alignment,
15049             prefersDedicatedAllocation,
15050             dedicatedBuffer,
15051             dedicatedImage,
15052             dedicatedBufferImageUsage,
15053             createInfoFinal,
15054             blockVector.GetMemoryTypeIndex(),
15055             suballocType,
15056             createInfoFinal.pool->m_DedicatedAllocations,
15057             blockVector,
15058             allocationCount,
15059             pAllocations);
15060     }
15061     else
15062     {
15063         // Bit mask of Vulkan memory types acceptable for this allocation.
15064         uint32_t memoryTypeBits = vkMemReq.memoryTypeBits;
15065         uint32_t memTypeIndex = UINT32_MAX;
15066         res = FindMemoryTypeIndex(memoryTypeBits, &createInfoFinal, dedicatedBufferImageUsage, &memTypeIndex);
15067         // Can't find any single memory type matching requirements. res is VK_ERROR_FEATURE_NOT_PRESENT.
15068         if(res != VK_SUCCESS)
15069             return res;
15070         do
15071         {
15072             VmaBlockVector* blockVector = m_pBlockVectors[memTypeIndex];
15073             VMA_ASSERT(blockVector && "Trying to use unsupported memory type!");
15074             res = AllocateMemoryOfType(
15075                 VK_NULL_HANDLE,
15076                 vkMemReq.size,
15077                 vkMemReq.alignment,
15078                 requiresDedicatedAllocation || prefersDedicatedAllocation,
15079                 dedicatedBuffer,
15080                 dedicatedImage,
15081                 dedicatedBufferImageUsage,
15082                 createInfoFinal,
15083                 memTypeIndex,
15084                 suballocType,
15085                 m_DedicatedAllocations[memTypeIndex],
15086                 *blockVector,
15087                 allocationCount,
15088                 pAllocations);
15089             // Allocation succeeded
15090             if(res == VK_SUCCESS)
15091                 return VK_SUCCESS;
15092 
15093             // Remove old memTypeIndex from list of possibilities.
15094             memoryTypeBits &= ~(1u << memTypeIndex);
15095             // Find alternative memTypeIndex.
15096             res = FindMemoryTypeIndex(memoryTypeBits, &createInfoFinal, dedicatedBufferImageUsage, &memTypeIndex);
15097         } while(res == VK_SUCCESS);
15098 
15099         // No other matching memory type index could be found.
15100         // Not returning res, which is VK_ERROR_FEATURE_NOT_PRESENT, because we already failed to allocate once.
15101         return VK_ERROR_OUT_OF_DEVICE_MEMORY;
15102     }
15103 }
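/*
Illustrative sketch (not part of the library) of the fallback strategy above:
try the best-scoring memory type first, then mask it out and retry with the
next candidate until none is left. TryAllocateFromType is a hypothetical helper
standing in for AllocateMemoryOfType:

    uint32_t memoryTypeBits = vkMemReq.memoryTypeBits;
    uint32_t memTypeIndex = UINT32_MAX;
    while(FindMemoryTypeIndex(memoryTypeBits, &createInfoFinal, dedicatedBufferImageUsage, &memTypeIndex) == VK_SUCCESS)
    {
        if(TryAllocateFromType(memTypeIndex) == VK_SUCCESS)
            return VK_SUCCESS;
        memoryTypeBits &= ~(1u << memTypeIndex); // exclude the failed type, look for another
    }
    return VK_ERROR_OUT_OF_DEVICE_MEMORY;
*/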
15104 
15105 void VmaAllocator_T::FreeMemory(
15106     size_t allocationCount,
15107     const VmaAllocation* pAllocations)
15108 {
15109     VMA_ASSERT(pAllocations);
15110 
15111     for(size_t allocIndex = allocationCount; allocIndex--; )
15112     {
15113         VmaAllocation allocation = pAllocations[allocIndex];
15114 
15115         if(allocation != VK_NULL_HANDLE)
15116         {
15117             if(VMA_DEBUG_INITIALIZE_ALLOCATIONS)
15118             {
15119                 FillAllocation(allocation, VMA_ALLOCATION_FILL_PATTERN_DESTROYED);
15120             }
15121 
15122             allocation->FreeName(this);
15123 
15124             switch(allocation->GetType())
15125             {
15126             case VmaAllocation_T::ALLOCATION_TYPE_BLOCK:
15127                 {
15128                     VmaBlockVector* pBlockVector = VMA_NULL;
15129                     VmaPool hPool = allocation->GetParentPool();
15130                     if(hPool != VK_NULL_HANDLE)
15131                     {
15132                         pBlockVector = &hPool->m_BlockVector;
15133                     }
15134                     else
15135                     {
15136                         const uint32_t memTypeIndex = allocation->GetMemoryTypeIndex();
15137                         pBlockVector = m_pBlockVectors[memTypeIndex];
15138                         VMA_ASSERT(pBlockVector && "Trying to free memory of unsupported type!");
15139                     }
15140                     pBlockVector->Free(allocation);
15141                 }
15142                 break;
15143             case VmaAllocation_T::ALLOCATION_TYPE_DEDICATED:
15144                 FreeDedicatedMemory(allocation);
15145                 break;
15146             default:
15147                 VMA_ASSERT(0);
15148             }
15149         }
15150     }
15151 }
15152 
15153 void VmaAllocator_T::CalculateStatistics(VmaTotalStatistics* pStats)
15154 {
15155     // Initialize.
15156     VmaClearDetailedStatistics(pStats->total);
15157     for(uint32_t i = 0; i < VK_MAX_MEMORY_TYPES; ++i)
15158         VmaClearDetailedStatistics(pStats->memoryType[i]);
15159     for(uint32_t i = 0; i < VK_MAX_MEMORY_HEAPS; ++i)
15160         VmaClearDetailedStatistics(pStats->memoryHeap[i]);
15161 
15162     // Process default pools.
15163     for(uint32_t memTypeIndex = 0; memTypeIndex < GetMemoryTypeCount(); ++memTypeIndex)
15164     {
15165         VmaBlockVector* const pBlockVector = m_pBlockVectors[memTypeIndex];
15166         if (pBlockVector != VMA_NULL)
15167             pBlockVector->AddDetailedStatistics(pStats->memoryType[memTypeIndex]);
15168     }
15169 
15170     // Process custom pools.
15171     {
15172         VmaMutexLockRead lock(m_PoolsMutex, m_UseMutex);
15173         for(VmaPool pool = m_Pools.Front(); pool != VMA_NULL; pool = m_Pools.GetNext(pool))
15174         {
15175             VmaBlockVector& blockVector = pool->m_BlockVector;
15176             const uint32_t memTypeIndex = blockVector.GetMemoryTypeIndex();
15177             blockVector.AddDetailedStatistics(pStats->memoryType[memTypeIndex]);
15178             pool->m_DedicatedAllocations.AddDetailedStatistics(pStats->memoryType[memTypeIndex]);
15179         }
15180     }
15181 
15182     // Process dedicated allocations.
15183     for(uint32_t memTypeIndex = 0; memTypeIndex < GetMemoryTypeCount(); ++memTypeIndex)
15184     {
15185         m_DedicatedAllocations[memTypeIndex].AddDetailedStatistics(pStats->memoryType[memTypeIndex]);
15186     }
15187 
15188     // Sum from memory types to memory heaps.
15189     for(uint32_t memTypeIndex = 0; memTypeIndex < GetMemoryTypeCount(); ++memTypeIndex)
15190     {
15191         const uint32_t memHeapIndex = m_MemProps.memoryTypes[memTypeIndex].heapIndex;
15192         VmaAddDetailedStatistics(pStats->memoryHeap[memHeapIndex], pStats->memoryType[memTypeIndex]);
15193     }
15194 
15195     // Sum from memory heaps to total.
15196     for(uint32_t memHeapIndex = 0; memHeapIndex < GetMemoryHeapCount(); ++memHeapIndex)
15197         VmaAddDetailedStatistics(pStats->total, pStats->memoryHeap[memHeapIndex]);
15198 
15199     VMA_ASSERT(pStats->total.statistics.allocationCount == 0 ||
15200         pStats->total.allocationSizeMax >= pStats->total.allocationSizeMin);
15201     VMA_ASSERT(pStats->total.unusedRangeCount == 0 ||
15202         pStats->total.unusedRangeSizeMax >= pStats->total.unusedRangeSizeMin);
15203 }
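/*
Usage sketch (public API): the aggregation above backs vmaCalculateStatistics().

    VmaTotalStatistics stats;
    vmaCalculateStatistics(allocator, &stats);
    printf("Allocated: %llu bytes in %u allocations\n",
        (unsigned long long)stats.total.statistics.allocationBytes,
        stats.total.statistics.allocationCount);
*/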
15204 
15205 void VmaAllocator_T::GetHeapBudgets(VmaBudget* outBudgets, uint32_t firstHeap, uint32_t heapCount)
15206 {
15207 #if VMA_MEMORY_BUDGET
15208     if(m_UseExtMemoryBudget)
15209     {
15210         if(m_Budget.m_OperationsSinceBudgetFetch < 30)
15211         {
15212             VmaMutexLockRead lockRead(m_Budget.m_BudgetMutex, m_UseMutex);
15213             for(uint32_t i = 0; i < heapCount; ++i, ++outBudgets)
15214             {
15215                 const uint32_t heapIndex = firstHeap + i;
15216 
15217                 outBudgets->statistics.blockCount = m_Budget.m_BlockCount[heapIndex];
15218                 outBudgets->statistics.allocationCount = m_Budget.m_AllocationCount[heapIndex];
15219                 outBudgets->statistics.blockBytes = m_Budget.m_BlockBytes[heapIndex];
15220                 outBudgets->statistics.allocationBytes = m_Budget.m_AllocationBytes[heapIndex];
15221 
15222                 if(m_Budget.m_VulkanUsage[heapIndex] + outBudgets->statistics.blockBytes > m_Budget.m_BlockBytesAtBudgetFetch[heapIndex])
15223                 {
15224                     outBudgets->usage = m_Budget.m_VulkanUsage[heapIndex] +
15225                         outBudgets->statistics.blockBytes - m_Budget.m_BlockBytesAtBudgetFetch[heapIndex];
15226                 }
15227                 else
15228                 {
15229                     outBudgets->usage = 0;
15230                 }
15231 
15232                 // Have to take MIN with heap size because explicit HeapSizeLimit is included in it.
15233                 outBudgets->budget = VMA_MIN(
15234                     m_Budget.m_VulkanBudget[heapIndex], m_MemProps.memoryHeaps[heapIndex].size);
15235             }
15236         }
15237         else
15238         {
15239             UpdateVulkanBudget(); // Outside of mutex lock
15240             GetHeapBudgets(outBudgets, firstHeap, heapCount); // Recursion
15241         }
15242     }
15243     else
15244 #endif
15245     {
15246         for(uint32_t i = 0; i < heapCount; ++i, ++outBudgets)
15247         {
15248             const uint32_t heapIndex = firstHeap + i;
15249 
15250             outBudgets->statistics.blockCount = m_Budget.m_BlockCount[heapIndex];
15251             outBudgets->statistics.allocationCount = m_Budget.m_AllocationCount[heapIndex];
15252             outBudgets->statistics.blockBytes = m_Budget.m_BlockBytes[heapIndex];
15253             outBudgets->statistics.allocationBytes = m_Budget.m_AllocationBytes[heapIndex];
15254 
15255             outBudgets->usage = outBudgets->statistics.blockBytes;
15256             outBudgets->budget = m_MemProps.memoryHeaps[heapIndex].size * 8 / 10; // 80% heuristics.
15257         }
15258     }
15259 }
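/*
Usage sketch (public API): reading the per-heap budgets computed above. Note
how usage between budget fetches is estimated as Vulkan-reported usage plus
block bytes allocated since the last fetch. heapCount is assumed to come from
VkPhysicalDeviceMemoryProperties::memoryHeapCount:

    VmaBudget budgets[VK_MAX_MEMORY_HEAPS];
    vmaGetHeapBudgets(allocator, budgets);
    for(uint32_t i = 0; i < heapCount; ++i)
        printf("Heap %u: usage %llu / budget %llu\n", i,
            (unsigned long long)budgets[i].usage,
            (unsigned long long)budgets[i].budget);
*/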
15260 
15261 void VmaAllocator_T::GetAllocationInfo(VmaAllocation hAllocation, VmaAllocationInfo* pAllocationInfo)
15262 {
15263     pAllocationInfo->memoryType = hAllocation->GetMemoryTypeIndex();
15264     pAllocationInfo->deviceMemory = hAllocation->GetMemory();
15265     pAllocationInfo->offset = hAllocation->GetOffset();
15266     pAllocationInfo->size = hAllocation->GetSize();
15267     pAllocationInfo->pMappedData = hAllocation->GetMappedData();
15268     pAllocationInfo->pUserData = hAllocation->GetUserData();
15269     pAllocationInfo->pName = hAllocation->GetName();
15270 }
15271 
15272 VkResult VmaAllocator_T::CreatePool(const VmaPoolCreateInfo* pCreateInfo, VmaPool* pPool)
15273 {
15274     VMA_DEBUG_LOG("  CreatePool: MemoryTypeIndex=%u, flags=%u", pCreateInfo->memoryTypeIndex, pCreateInfo->flags);
15275 
15276     VmaPoolCreateInfo newCreateInfo = *pCreateInfo;
15277 
15278     // Protection against an uninitialized new structure member. If garbage data were left there, this pointer dereference would crash.
15279     if(pCreateInfo->pMemoryAllocateNext)
15280     {
15281         VMA_ASSERT(((const VkBaseInStructure*)pCreateInfo->pMemoryAllocateNext)->sType != 0);
15282     }
15283 
15284     if(newCreateInfo.maxBlockCount == 0)
15285     {
15286         newCreateInfo.maxBlockCount = SIZE_MAX;
15287     }
15288     if(newCreateInfo.minBlockCount > newCreateInfo.maxBlockCount)
15289     {
15290         return VK_ERROR_INITIALIZATION_FAILED;
15291     }
15292     // Memory type index out of range or forbidden.
15293     if(pCreateInfo->memoryTypeIndex >= GetMemoryTypeCount() ||
15294         ((1u << pCreateInfo->memoryTypeIndex) & m_GlobalMemoryTypeBits) == 0)
15295     {
15296         return VK_ERROR_FEATURE_NOT_PRESENT;
15297     }
15298     if(newCreateInfo.minAllocationAlignment > 0)
15299     {
15300         VMA_ASSERT(VmaIsPow2(newCreateInfo.minAllocationAlignment));
15301     }
15302 
15303     const VkDeviceSize preferredBlockSize = CalcPreferredBlockSize(newCreateInfo.memoryTypeIndex);
15304 
15305     *pPool = vma_new(this, VmaPool_T)(this, newCreateInfo, preferredBlockSize);
15306 
15307     VkResult res = (*pPool)->m_BlockVector.CreateMinBlocks();
15308     if(res != VK_SUCCESS)
15309     {
15310         vma_delete(this, *pPool);
15311         *pPool = VMA_NULL;
15312         return res;
15313     }
15314 
15315     // Add to m_Pools.
15316     {
15317         VmaMutexLockWrite lock(m_PoolsMutex, m_UseMutex);
15318         (*pPool)->SetId(m_NextPoolId++);
15319         m_Pools.PushBack(*pPool);
15320     }
15321 
15322     return VK_SUCCESS;
15323 }
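/*
Usage sketch (public API): creating a custom pool, which lands in CreatePool
above. memTypeIndex is assumed to come from a prior vmaFindMemoryTypeIndex*
call:

    VmaPoolCreateInfo poolCreateInfo = {};
    poolCreateInfo.memoryTypeIndex = memTypeIndex;
    poolCreateInfo.blockSize = 64ull << 20; // optional: fixed 64 MiB blocks
    poolCreateInfo.minBlockCount = 1;
    poolCreateInfo.maxBlockCount = 8;       // 0 would mean "no limit" (SIZE_MAX)

    VmaPool pool;
    VkResult res = vmaCreatePool(allocator, &poolCreateInfo, &pool);
    // ... allocate with VmaAllocationCreateInfo::pool = pool ...
    // vmaDestroyPool(allocator, pool);
*/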
15324 
15325 void VmaAllocator_T::DestroyPool(VmaPool pool)
15326 {
15327     // Remove from m_Pools.
15328     {
15329         VmaMutexLockWrite lock(m_PoolsMutex, m_UseMutex);
15330         m_Pools.Remove(pool);
15331     }
15332 
15333     vma_delete(this, pool);
15334 }
15335 
15336 void VmaAllocator_T::GetPoolStatistics(VmaPool pool, VmaStatistics* pPoolStats)
15337 {
15338     VmaClearStatistics(*pPoolStats);
15339     pool->m_BlockVector.AddStatistics(*pPoolStats);
15340     pool->m_DedicatedAllocations.AddStatistics(*pPoolStats);
15341 }
15342 
15343 void VmaAllocator_T::CalculatePoolStatistics(VmaPool pool, VmaDetailedStatistics* pPoolStats)
15344 {
15345     VmaClearDetailedStatistics(*pPoolStats);
15346     pool->m_BlockVector.AddDetailedStatistics(*pPoolStats);
15347     pool->m_DedicatedAllocations.AddDetailedStatistics(*pPoolStats);
15348 }
15349 
15350 void VmaAllocator_T::SetCurrentFrameIndex(uint32_t frameIndex)
15351 {
15352     m_CurrentFrameIndex.store(frameIndex);
15353 
15354 #if VMA_MEMORY_BUDGET
15355     if(m_UseExtMemoryBudget)
15356     {
15357         UpdateVulkanBudget();
15358     }
15359 #endif // #if VMA_MEMORY_BUDGET
15360 }
15361 
15362 VkResult VmaAllocator_T::CheckPoolCorruption(VmaPool hPool)
15363 {
15364     return hPool->m_BlockVector.CheckCorruption();
15365 }
15366 
15367 VkResult VmaAllocator_T::CheckCorruption(uint32_t memoryTypeBits)
15368 {
15369     VkResult finalRes = VK_ERROR_FEATURE_NOT_PRESENT;
15370 
15371     // Process default pools.
15372     for(uint32_t memTypeIndex = 0; memTypeIndex < GetMemoryTypeCount(); ++memTypeIndex)
15373     {
15374         VmaBlockVector* const pBlockVector = m_pBlockVectors[memTypeIndex];
15375         if(pBlockVector != VMA_NULL)
15376         {
15377             VkResult localRes = pBlockVector->CheckCorruption();
15378             switch(localRes)
15379             {
15380             case VK_ERROR_FEATURE_NOT_PRESENT:
15381                 break;
15382             case VK_SUCCESS:
15383                 finalRes = VK_SUCCESS;
15384                 break;
15385             default:
15386                 return localRes;
15387             }
15388         }
15389     }
15390 
15391     // Process custom pools.
15392     {
15393         VmaMutexLockRead lock(m_PoolsMutex, m_UseMutex);
15394         for(VmaPool pool = m_Pools.Front(); pool != VMA_NULL; pool = m_Pools.GetNext(pool))
15395         {
15396             if(((1u << pool->m_BlockVector.GetMemoryTypeIndex()) & memoryTypeBits) != 0)
15397             {
15398                 VkResult localRes = pool->m_BlockVector.CheckCorruption();
15399                 switch(localRes)
15400                 {
15401                 case VK_ERROR_FEATURE_NOT_PRESENT:
15402                     break;
15403                 case VK_SUCCESS:
15404                     finalRes = VK_SUCCESS;
15405                     break;
15406                 default:
15407                     return localRes;
15408                 }
15409             }
15410         }
15411     }
15412 
15413     return finalRes;
15414 }
15415 
15416 VkResult VmaAllocator_T::AllocateVulkanMemory(const VkMemoryAllocateInfo* pAllocateInfo, VkDeviceMemory* pMemory)
15417 {
15418     AtomicTransactionalIncrement<uint32_t> deviceMemoryCountIncrement;
15419     const uint64_t prevDeviceMemoryCount = deviceMemoryCountIncrement.Increment(&m_DeviceMemoryCount);
15420 #if VMA_DEBUG_DONT_EXCEED_MAX_MEMORY_ALLOCATION_COUNT
15421     if(prevDeviceMemoryCount >= m_PhysicalDeviceProperties.limits.maxMemoryAllocationCount)
15422     {
15423         return VK_ERROR_TOO_MANY_OBJECTS;
15424     }
15425 #endif
15426 
15427     const uint32_t heapIndex = MemoryTypeIndexToHeapIndex(pAllocateInfo->memoryTypeIndex);
15428 
15429     // HeapSizeLimit is in effect for this heap.
15430     if((m_HeapSizeLimitMask & (1u << heapIndex)) != 0)
15431     {
15432         const VkDeviceSize heapSize = m_MemProps.memoryHeaps[heapIndex].size;
15433         VkDeviceSize blockBytes = m_Budget.m_BlockBytes[heapIndex];
15434         for(;;)
15435         {
15436             const VkDeviceSize blockBytesAfterAllocation = blockBytes + pAllocateInfo->allocationSize;
15437             if(blockBytesAfterAllocation > heapSize)
15438             {
15439                 return VK_ERROR_OUT_OF_DEVICE_MEMORY;
15440             }
15441             if(m_Budget.m_BlockBytes[heapIndex].compare_exchange_strong(blockBytes, blockBytesAfterAllocation))
15442             {
15443                 break;
15444             }
15445         }
15446     }
15447     else
15448     {
15449         m_Budget.m_BlockBytes[heapIndex] += pAllocateInfo->allocationSize;
15450     }
15451     ++m_Budget.m_BlockCount[heapIndex];
15452 
15453     // VULKAN CALL vkAllocateMemory.
15454     VkResult res = (*m_VulkanFunctions.vkAllocateMemory)(m_hDevice, pAllocateInfo, GetAllocationCallbacks(), pMemory);
15455 
15456     if(res == VK_SUCCESS)
15457     {
15458 #if VMA_MEMORY_BUDGET
15459         ++m_Budget.m_OperationsSinceBudgetFetch;
15460 #endif
15461 
15462         // Informative callback.
15463         if(m_DeviceMemoryCallbacks.pfnAllocate != VMA_NULL)
15464         {
15465             (*m_DeviceMemoryCallbacks.pfnAllocate)(this, pAllocateInfo->memoryTypeIndex, *pMemory, pAllocateInfo->allocationSize, m_DeviceMemoryCallbacks.pUserData);
15466         }
15467 
15468         deviceMemoryCountIncrement.Commit();
15469     }
15470     else
15471     {
15472         --m_Budget.m_BlockCount[heapIndex];
15473         m_Budget.m_BlockBytes[heapIndex] -= pAllocateInfo->allocationSize;
15474     }
15475 
15476     return res;
15477 }
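/*
Illustrative sketch of the lock-free reservation in AllocateVulkanMemory above:
the bytes are reserved with compare_exchange_strong before vkAllocateMemory is
called, so concurrent threads cannot jointly exceed an explicit heap size
limit. Assumes a std::atomic-compatible counter:

    VkDeviceSize expected = heapBlockBytes.load(); // per-heap counter, as in m_Budget.m_BlockBytes
    for(;;)
    {
        const VkDeviceSize desired = expected + allocationSize;
        if(desired > heapSizeLimit)
            return VK_ERROR_OUT_OF_DEVICE_MEMORY;  // reservation would exceed the cap
        if(heapBlockBytes.compare_exchange_strong(expected, desired))
            break;                                 // reserved; safe to call vkAllocateMemory
        // on failure 'expected' was refreshed with the current value; retry
    }
*/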
15478 
15479 void VmaAllocator_T::FreeVulkanMemory(uint32_t memoryType, VkDeviceSize size, VkDeviceMemory hMemory)
15480 {
15481     // Informative callback.
15482     if(m_DeviceMemoryCallbacks.pfnFree != VMA_NULL)
15483     {
15484         (*m_DeviceMemoryCallbacks.pfnFree)(this, memoryType, hMemory, size, m_DeviceMemoryCallbacks.pUserData);
15485     }
15486 
15487     // VULKAN CALL vkFreeMemory.
15488     (*m_VulkanFunctions.vkFreeMemory)(m_hDevice, hMemory, GetAllocationCallbacks());
15489 
15490     const uint32_t heapIndex = MemoryTypeIndexToHeapIndex(memoryType);
15491     --m_Budget.m_BlockCount[heapIndex];
15492     m_Budget.m_BlockBytes[heapIndex] -= size;
15493 
15494     --m_DeviceMemoryCount;
15495 }
15496 
15497 VkResult VmaAllocator_T::BindVulkanBuffer(
15498     VkDeviceMemory memory,
15499     VkDeviceSize memoryOffset,
15500     VkBuffer buffer,
15501     const void* pNext)
15502 {
15503     if(pNext != VMA_NULL)
15504     {
15505 #if VMA_VULKAN_VERSION >= 1001000 || VMA_BIND_MEMORY2
15506         if((m_UseKhrBindMemory2 || m_VulkanApiVersion >= VK_MAKE_VERSION(1, 1, 0)) &&
15507             m_VulkanFunctions.vkBindBufferMemory2KHR != VMA_NULL)
15508         {
15509             VkBindBufferMemoryInfoKHR bindBufferMemoryInfo = { VK_STRUCTURE_TYPE_BIND_BUFFER_MEMORY_INFO_KHR };
15510             bindBufferMemoryInfo.pNext = pNext;
15511             bindBufferMemoryInfo.buffer = buffer;
15512             bindBufferMemoryInfo.memory = memory;
15513             bindBufferMemoryInfo.memoryOffset = memoryOffset;
15514             return (*m_VulkanFunctions.vkBindBufferMemory2KHR)(m_hDevice, 1, &bindBufferMemoryInfo);
15515         }
15516         else
15517 #endif // #if VMA_VULKAN_VERSION >= 1001000 || VMA_BIND_MEMORY2
15518         {
15519             return VK_ERROR_EXTENSION_NOT_PRESENT;
15520         }
15521     }
15522     else
15523     {
15524         return (*m_VulkanFunctions.vkBindBufferMemory)(m_hDevice, buffer, memory, memoryOffset);
15525     }
15526 }
15527 
15528 VkResult VmaAllocator_T::BindVulkanImage(
15529     VkDeviceMemory memory,
15530     VkDeviceSize memoryOffset,
15531     VkImage image,
15532     const void* pNext)
15533 {
15534     if(pNext != VMA_NULL)
15535     {
15536 #if VMA_VULKAN_VERSION >= 1001000 || VMA_BIND_MEMORY2
15537         if((m_UseKhrBindMemory2 || m_VulkanApiVersion >= VK_MAKE_VERSION(1, 1, 0)) &&
15538             m_VulkanFunctions.vkBindImageMemory2KHR != VMA_NULL)
15539         {
15540             VkBindImageMemoryInfoKHR bindImageMemoryInfo = { VK_STRUCTURE_TYPE_BIND_IMAGE_MEMORY_INFO_KHR };
15541             bindImageMemoryInfo.pNext = pNext;
15542             bindImageMemoryInfo.image = image;
15543             bindImageMemoryInfo.memory = memory;
15544             bindImageMemoryInfo.memoryOffset = memoryOffset;
15545             return (*m_VulkanFunctions.vkBindImageMemory2KHR)(m_hDevice, 1, &bindImageMemoryInfo);
15546         }
15547         else
15548 #endif // #if VMA_VULKAN_VERSION >= 1001000 || VMA_BIND_MEMORY2
15549         {
15550             return VK_ERROR_EXTENSION_NOT_PRESENT;
15551         }
15552     }
15553     else
15554     {
15555         return (*m_VulkanFunctions.vkBindImageMemory)(m_hDevice, image, memory, memoryOffset);
15556     }
15557 }
15558 
15559 VkResult VmaAllocator_T::Map(VmaAllocation hAllocation, void** ppData)
15560 {
15561     switch(hAllocation->GetType())
15562     {
15563     case VmaAllocation_T::ALLOCATION_TYPE_BLOCK:
15564         {
15565             VmaDeviceMemoryBlock* const pBlock = hAllocation->GetBlock();
15566             char *pBytes = VMA_NULL;
15567             VkResult res = pBlock->Map(this, 1, (void**)&pBytes);
15568             if(res == VK_SUCCESS)
15569             {
15570                 *ppData = pBytes + (ptrdiff_t)hAllocation->GetOffset();
15571                 hAllocation->BlockAllocMap();
15572             }
15573             return res;
15574         }
15575     case VmaAllocation_T::ALLOCATION_TYPE_DEDICATED:
15576         return hAllocation->DedicatedAllocMap(this, ppData);
15577     default:
15578         VMA_ASSERT(0);
15579         return VK_ERROR_MEMORY_MAP_FAILED;
15580     }
15581 }
15582 
15583 void VmaAllocator_T::Unmap(VmaAllocation hAllocation)
15584 {
15585     switch(hAllocation->GetType())
15586     {
15587     case VmaAllocation_T::ALLOCATION_TYPE_BLOCK:
15588         {
15589             VmaDeviceMemoryBlock* const pBlock = hAllocation->GetBlock();
15590             hAllocation->BlockAllocUnmap();
15591             pBlock->Unmap(this, 1);
15592         }
15593         break;
15594     case VmaAllocation_T::ALLOCATION_TYPE_DEDICATED:
15595         hAllocation->DedicatedAllocUnmap(this);
15596         break;
15597     default:
15598         VMA_ASSERT(0);
15599     }
15600 }
15601 
15602 VkResult VmaAllocator_T::BindBufferMemory(
15603     VmaAllocation hAllocation,
15604     VkDeviceSize allocationLocalOffset,
15605     VkBuffer hBuffer,
15606     const void* pNext)
15607 {
15608     VkResult res = VK_SUCCESS;
15609     switch(hAllocation->GetType())
15610     {
15611     case VmaAllocation_T::ALLOCATION_TYPE_DEDICATED:
15612         res = BindVulkanBuffer(hAllocation->GetMemory(), allocationLocalOffset, hBuffer, pNext);
15613         break;
15614     case VmaAllocation_T::ALLOCATION_TYPE_BLOCK:
15615     {
15616         VmaDeviceMemoryBlock* const pBlock = hAllocation->GetBlock();
15617         VMA_ASSERT(pBlock && "Binding buffer to allocation that doesn't belong to any block.");
15618         res = pBlock->BindBufferMemory(this, hAllocation, allocationLocalOffset, hBuffer, pNext);
15619         break;
15620     }
15621     default:
15622         VMA_ASSERT(0);
15623     }
15624     return res;
15625 }
15626 
15627 VkResult VmaAllocator_T::BindImageMemory(
15628     VmaAllocation hAllocation,
15629     VkDeviceSize allocationLocalOffset,
15630     VkImage hImage,
15631     const void* pNext)
15632 {
15633     VkResult res = VK_SUCCESS;
15634     switch(hAllocation->GetType())
15635     {
15636     case VmaAllocation_T::ALLOCATION_TYPE_DEDICATED:
15637         res = BindVulkanImage(hAllocation->GetMemory(), allocationLocalOffset, hImage, pNext);
15638         break;
15639     case VmaAllocation_T::ALLOCATION_TYPE_BLOCK:
15640     {
15641         VmaDeviceMemoryBlock* pBlock = hAllocation->GetBlock();
15642         VMA_ASSERT(pBlock && "Binding image to allocation that doesn't belong to any block.");
15643         res = pBlock->BindImageMemory(this, hAllocation, allocationLocalOffset, hImage, pNext);
15644         break;
15645     }
15646     default:
15647         VMA_ASSERT(0);
15648     }
15649     return res;
15650 }
15651 
15652 VkResult VmaAllocator_T::FlushOrInvalidateAllocation(
15653     VmaAllocation hAllocation,
15654     VkDeviceSize offset, VkDeviceSize size,
15655     VMA_CACHE_OPERATION op)
15656 {
15657     VkResult res = VK_SUCCESS;
15658 
15659     VkMappedMemoryRange memRange = {};
15660     if(GetFlushOrInvalidateRange(hAllocation, offset, size, memRange))
15661     {
15662         switch(op)
15663         {
15664         case VMA_CACHE_FLUSH:
15665             res = (*GetVulkanFunctions().vkFlushMappedMemoryRanges)(m_hDevice, 1, &memRange);
15666             break;
15667         case VMA_CACHE_INVALIDATE:
15668             res = (*GetVulkanFunctions().vkInvalidateMappedMemoryRanges)(m_hDevice, 1, &memRange);
15669             break;
15670         default:
15671             VMA_ASSERT(0);
15672         }
15673     }
15674     // else: Just ignore this call.
15675     return res;
15676 }
15677 
15678 VkResult VmaAllocator_T::FlushOrInvalidateAllocations(
15679     uint32_t allocationCount,
15680     const VmaAllocation* allocations,
15681     const VkDeviceSize* offsets, const VkDeviceSize* sizes,
15682     VMA_CACHE_OPERATION op)
15683 {
15684     typedef VmaStlAllocator<VkMappedMemoryRange> RangeAllocator;
15685     typedef VmaSmallVector<VkMappedMemoryRange, RangeAllocator, 16> RangeVector;
15686     RangeVector ranges = RangeVector(RangeAllocator(GetAllocationCallbacks()));
15687 
15688     for(uint32_t allocIndex = 0; allocIndex < allocationCount; ++allocIndex)
15689     {
15690         const VmaAllocation alloc = allocations[allocIndex];
15691         const VkDeviceSize offset = offsets != VMA_NULL ? offsets[allocIndex] : 0;
15692         const VkDeviceSize size = sizes != VMA_NULL ? sizes[allocIndex] : VK_WHOLE_SIZE;
15693         VkMappedMemoryRange newRange;
15694         if(GetFlushOrInvalidateRange(alloc, offset, size, newRange))
15695         {
15696             ranges.push_back(newRange);
15697         }
15698     }
15699 
15700     VkResult res = VK_SUCCESS;
15701     if(!ranges.empty())
15702     {
15703         switch(op)
15704         {
15705         case VMA_CACHE_FLUSH:
15706             res = (*GetVulkanFunctions().vkFlushMappedMemoryRanges)(m_hDevice, (uint32_t)ranges.size(), ranges.data());
15707             break;
15708         case VMA_CACHE_INVALIDATE:
15709             res = (*GetVulkanFunctions().vkInvalidateMappedMemoryRanges)(m_hDevice, (uint32_t)ranges.size(), ranges.data());
15710             break;
15711         default:
15712             VMA_ASSERT(0);
15713         }
15714     }
15715     // else: Just ignore this call.
15716     return res;
15717 }
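/*
Usage sketch (public API): batched flush of several non-coherent allocations,
served by FlushOrInvalidateAllocations above with a single
vkFlushMappedMemoryRanges call. allocA/allocB are hypothetical allocations:

    VmaAllocation allocs[2]  = { allocA, allocB };
    VkDeviceSize  offsets[2] = { 0, 0 };
    VkDeviceSize  sizes[2]   = { VK_WHOLE_SIZE, 256 };
    vmaFlushAllocations(allocator, 2, allocs, offsets, sizes);
    // Ranges in coherent memory are silently skipped; only non-coherent ones are flushed.
*/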
15718 
15719 void VmaAllocator_T::FreeDedicatedMemory(const VmaAllocation allocation)
15720 {
15721     VMA_ASSERT(allocation && allocation->GetType() == VmaAllocation_T::ALLOCATION_TYPE_DEDICATED);
15722 
15723     const uint32_t memTypeIndex = allocation->GetMemoryTypeIndex();
15724     VmaPool parentPool = allocation->GetParentPool();
15725     if(parentPool == VK_NULL_HANDLE)
15726     {
15727         // Default pool
15728         m_DedicatedAllocations[memTypeIndex].Unregister(allocation);
15729     }
15730     else
15731     {
15732         // Custom pool
15733         parentPool->m_DedicatedAllocations.Unregister(allocation);
15734     }
15735 
15736     VkDeviceMemory hMemory = allocation->GetMemory();
15737 
15738     /*
15739     There is no need to call this, because the Vulkan spec allows skipping
15740     vkUnmapMemory before vkFreeMemory.
15741 
15742     if(allocation->GetMappedData() != VMA_NULL)
15743     {
15744         (*m_VulkanFunctions.vkUnmapMemory)(m_hDevice, hMemory);
15745     }
15746     */
15747 
15748     FreeVulkanMemory(memTypeIndex, allocation->GetSize(), hMemory);
15749 
15750     m_Budget.RemoveAllocation(MemoryTypeIndexToHeapIndex(allocation->GetMemoryTypeIndex()), allocation->GetSize());
15751     m_AllocationObjectAllocator.Free(allocation);
15752 
15753     VMA_DEBUG_LOG("    Freed DedicatedMemory MemoryTypeIndex=%u", memTypeIndex);
15754 }
15755 
15756 uint32_t VmaAllocator_T::CalculateGpuDefragmentationMemoryTypeBits() const
15757 {
15758     VkBufferCreateInfo dummyBufCreateInfo;
15759     VmaFillGpuDefragmentationBufferCreateInfo(dummyBufCreateInfo);
15760 
15761     uint32_t memoryTypeBits = 0;
15762 
15763     // Create buffer.
15764     VkBuffer buf = VK_NULL_HANDLE;
15765     VkResult res = (*GetVulkanFunctions().vkCreateBuffer)(
15766         m_hDevice, &dummyBufCreateInfo, GetAllocationCallbacks(), &buf);
15767     if(res == VK_SUCCESS)
15768     {
15769         // Query for supported memory types.
15770         VkMemoryRequirements memReq;
15771         (*GetVulkanFunctions().vkGetBufferMemoryRequirements)(m_hDevice, buf, &memReq);
15772         memoryTypeBits = memReq.memoryTypeBits;
15773 
15774         // Destroy buffer.
15775         (*GetVulkanFunctions().vkDestroyBuffer)(m_hDevice, buf, GetAllocationCallbacks());
15776     }
15777 
15778     return memoryTypeBits;
15779 }
15780 
15781 uint32_t VmaAllocator_T::CalculateGlobalMemoryTypeBits() const
15782 {
15783     // Make sure memory information is already fetched.
15784     VMA_ASSERT(GetMemoryTypeCount() > 0);
15785 
15786     uint32_t memoryTypeBits = UINT32_MAX;
15787 
15788     if(!m_UseAmdDeviceCoherentMemory)
15789     {
15790         // Exclude memory types that have VK_MEMORY_PROPERTY_DEVICE_COHERENT_BIT_AMD.
15791         for(uint32_t memTypeIndex = 0; memTypeIndex < GetMemoryTypeCount(); ++memTypeIndex)
15792         {
15793             if((m_MemProps.memoryTypes[memTypeIndex].propertyFlags & VK_MEMORY_PROPERTY_DEVICE_COHERENT_BIT_AMD_COPY) != 0)
15794             {
15795                 memoryTypeBits &= ~(1u << memTypeIndex);
15796             }
15797         }
15798     }
15799 
15800     return memoryTypeBits;
15801 }
15802 
15803 bool VmaAllocator_T::GetFlushOrInvalidateRange(
15804     VmaAllocation allocation,
15805     VkDeviceSize offset, VkDeviceSize size,
15806     VkMappedMemoryRange& outRange) const
15807 {
15808     const uint32_t memTypeIndex = allocation->GetMemoryTypeIndex();
15809     if(size > 0 && IsMemoryTypeNonCoherent(memTypeIndex))
15810     {
15811         const VkDeviceSize nonCoherentAtomSize = m_PhysicalDeviceProperties.limits.nonCoherentAtomSize;
15812         const VkDeviceSize allocationSize = allocation->GetSize();
15813         VMA_ASSERT(offset <= allocationSize);
15814 
15815         outRange.sType = VK_STRUCTURE_TYPE_MAPPED_MEMORY_RANGE;
15816         outRange.pNext = VMA_NULL;
15817         outRange.memory = allocation->GetMemory();
15818 
15819         switch(allocation->GetType())
15820         {
15821         case VmaAllocation_T::ALLOCATION_TYPE_DEDICATED:
15822             outRange.offset = VmaAlignDown(offset, nonCoherentAtomSize);
15823             if(size == VK_WHOLE_SIZE)
15824             {
15825                 outRange.size = allocationSize - outRange.offset;
15826             }
15827             else
15828             {
15829                 VMA_ASSERT(offset + size <= allocationSize);
15830                 outRange.size = VMA_MIN(
15831                     VmaAlignUp(size + (offset - outRange.offset), nonCoherentAtomSize),
15832                     allocationSize - outRange.offset);
15833             }
15834             break;
15835         case VmaAllocation_T::ALLOCATION_TYPE_BLOCK:
15836         {
15837             // 1. Compute the range relative to this allocation.
15838             outRange.offset = VmaAlignDown(offset, nonCoherentAtomSize);
15839             if(size == VK_WHOLE_SIZE)
15840             {
15841                 size = allocationSize - offset;
15842             }
15843             else
15844             {
15845                 VMA_ASSERT(offset + size <= allocationSize);
15846             }
15847             outRange.size = VmaAlignUp(size + (offset - outRange.offset), nonCoherentAtomSize);
15848 
15849             // 2. Adjust to whole block.
15850             const VkDeviceSize allocationOffset = allocation->GetOffset();
15851             VMA_ASSERT(allocationOffset % nonCoherentAtomSize == 0);
15852             const VkDeviceSize blockSize = allocation->GetBlock()->m_pMetadata->GetSize();
15853             outRange.offset += allocationOffset;
15854             outRange.size = VMA_MIN(outRange.size, blockSize - outRange.offset);
15855 
15856             break;
15857         }
15858         default:
15859             VMA_ASSERT(0);
15860         }
15861         return true;
15862     }
15863     return false;
15864 }
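// Note: the range computed above is expanded to nonCoherentAtomSize granularity,
// as vkFlushMappedMemoryRanges()/vkInvalidateMappedMemoryRanges() require.
// For example, with nonCoherentAtomSize = 64, a request of offset = 12, size = 8
// becomes offset = 0, size = 64, clamped to the end of the allocation or block.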
15865 
15866 #if VMA_MEMORY_BUDGET
15867 void VmaAllocator_T::UpdateVulkanBudget()
15868 {
15869     VMA_ASSERT(m_UseExtMemoryBudget);
15870 
15871     VkPhysicalDeviceMemoryProperties2KHR memProps = { VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_MEMORY_PROPERTIES_2_KHR };
15872 
15873     VkPhysicalDeviceMemoryBudgetPropertiesEXT budgetProps = { VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_MEMORY_BUDGET_PROPERTIES_EXT };
15874     VmaPnextChainPushFront(&memProps, &budgetProps);
15875 
15876     GetVulkanFunctions().vkGetPhysicalDeviceMemoryProperties2KHR(m_PhysicalDevice, &memProps);
15877 
15878     {
15879         VmaMutexLockWrite lockWrite(m_Budget.m_BudgetMutex, m_UseMutex);
15880 
15881         for(uint32_t heapIndex = 0; heapIndex < GetMemoryHeapCount(); ++heapIndex)
15882         {
15883             m_Budget.m_VulkanUsage[heapIndex] = budgetProps.heapUsage[heapIndex];
15884             m_Budget.m_VulkanBudget[heapIndex] = budgetProps.heapBudget[heapIndex];
15885             m_Budget.m_BlockBytesAtBudgetFetch[heapIndex] = m_Budget.m_BlockBytes[heapIndex].load();
15886 
15887             // Some buggy drivers return an incorrect budget, e.g. 0 or much bigger than the heap size.
15888             if(m_Budget.m_VulkanBudget[heapIndex] == 0)
15889             {
15890                 m_Budget.m_VulkanBudget[heapIndex] = m_MemProps.memoryHeaps[heapIndex].size * 8 / 10; // Heuristic: assume 80% of the heap size is available.
15891             }
15892             else if(m_Budget.m_VulkanBudget[heapIndex] > m_MemProps.memoryHeaps[heapIndex].size)
15893             {
15894                 m_Budget.m_VulkanBudget[heapIndex] = m_MemProps.memoryHeaps[heapIndex].size;
15895             }
15896             if(m_Budget.m_VulkanUsage[heapIndex] == 0 && m_Budget.m_BlockBytesAtBudgetFetch[heapIndex] > 0)
15897             {
15898                 m_Budget.m_VulkanUsage[heapIndex] = m_Budget.m_BlockBytesAtBudgetFetch[heapIndex];
15899             }
15900         }
15901         m_Budget.m_OperationsSinceBudgetFetch = 0;
15902     }
15903 }
15904 #endif // VMA_MEMORY_BUDGET
15905 
15906 void VmaAllocator_T::FillAllocation(const VmaAllocation hAllocation, uint8_t pattern)
15907 {
15908     if(VMA_DEBUG_INITIALIZE_ALLOCATIONS &&
15909         (m_MemProps.memoryTypes[hAllocation->GetMemoryTypeIndex()].propertyFlags & VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT) != 0)
15910     {
15911         void* pData = VMA_NULL;
15912         VkResult res = Map(hAllocation, &pData);
15913         if(res == VK_SUCCESS)
15914         {
15915             memset(pData, (int)pattern, (size_t)hAllocation->GetSize());
15916             FlushOrInvalidateAllocation(hAllocation, 0, VK_WHOLE_SIZE, VMA_CACHE_FLUSH);
15917             Unmap(hAllocation);
15918         }
15919         else
15920         {
15921             VMA_ASSERT(0 && "VMA_DEBUG_INITIALIZE_ALLOCATIONS is enabled, but couldn't map memory to fill allocation.");
15922         }
15923     }
15924 }
15925 
15926 uint32_t VmaAllocator_T::GetGpuDefragmentationMemoryTypeBits()
15927 {
15928     uint32_t memoryTypeBits = m_GpuDefragmentationMemoryTypeBits.load();
15929     if(memoryTypeBits == UINT32_MAX)
15930     {
15931         memoryTypeBits = CalculateGpuDefragmentationMemoryTypeBits();
15932         m_GpuDefragmentationMemoryTypeBits.store(memoryTypeBits);
15933     }
15934     return memoryTypeBits;
15935 }
15936 
15937 #if VMA_STATS_STRING_ENABLED
15938 void VmaAllocator_T::PrintDetailedMap(VmaJsonWriter& json)
15939 {
15940     json.WriteString("DefaultPools");
15941     json.BeginObject();
15942     {
15943         for (uint32_t memTypeIndex = 0; memTypeIndex < GetMemoryTypeCount(); ++memTypeIndex)
15944         {
15945             VmaBlockVector* pBlockVector = m_pBlockVectors[memTypeIndex];
15946             VmaDedicatedAllocationList& dedicatedAllocList = m_DedicatedAllocations[memTypeIndex];
15947             if (pBlockVector != VMA_NULL)
15948             {
15949                 json.BeginString("Type ");
15950                 json.ContinueString(memTypeIndex);
15951                 json.EndString();
15952                 json.BeginObject();
15953                 {
15954                     json.WriteString("PreferredBlockSize");
15955                     json.WriteNumber(pBlockVector->GetPreferredBlockSize());
15956 
15957                     json.WriteString("Blocks");
15958                     pBlockVector->PrintDetailedMap(json);
15959 
15960                     json.WriteString("DedicatedAllocations");
15961                     dedicatedAllocList.BuildStatsString(json);
15962                 }
15963                 json.EndObject();
15964             }
15965         }
15966     }
15967     json.EndObject();
15968 
15969     json.WriteString("CustomPools");
15970     json.BeginObject();
15971     {
15972         VmaMutexLockRead lock(m_PoolsMutex, m_UseMutex);
15973         if (!m_Pools.IsEmpty())
15974         {
15975             for (uint32_t memTypeIndex = 0; memTypeIndex < GetMemoryTypeCount(); ++memTypeIndex)
15976             {
15977                 bool displayType = true;
15978                 size_t index = 0;
15979                 for (VmaPool pool = m_Pools.Front(); pool != VMA_NULL; pool = m_Pools.GetNext(pool))
15980                 {
15981                     VmaBlockVector& blockVector = pool->m_BlockVector;
15982                     if (blockVector.GetMemoryTypeIndex() == memTypeIndex)
15983                     {
15984                         if (displayType)
15985                         {
15986                             json.BeginString("Type ");
15987                             json.ContinueString(memTypeIndex);
15988                             json.EndString();
15989                             json.BeginArray();
15990                             displayType = false;
15991                         }
15992 
15993                         json.BeginObject();
15994                         {
15995                             json.WriteString("Name");
15996                             json.BeginString();
15997                             json.ContinueString_Size(index++);
15998                             if (pool->GetName())
15999                             {
16000                                 json.ContinueString(" - ");
16001                                 json.ContinueString(pool->GetName());
16002                             }
16003                             json.EndString();
16004 
16005                             json.WriteString("PreferredBlockSize");
16006                             json.WriteNumber(blockVector.GetPreferredBlockSize());
16007 
16008                             json.WriteString("Blocks");
16009                             blockVector.PrintDetailedMap(json);
16010 
16011                             json.WriteString("DedicatedAllocations");
16012                             pool->m_DedicatedAllocations.BuildStatsString(json);
16013                         }
16014                         json.EndObject();
16015                     }
16016                 }
16017 
16018                 if (!displayType)
16019                     json.EndArray();
16020             }
16021         }
16022     }
16023     json.EndObject();
16024 }
16025 #endif // VMA_STATS_STRING_ENABLED
16026 #endif // _VMA_ALLOCATOR_T_FUNCTIONS
16027 
16028 
16029 #ifndef _VMA_PUBLIC_INTERFACE
16030 VMA_CALL_PRE VkResult VMA_CALL_POST vmaCreateAllocator(
16031     const VmaAllocatorCreateInfo* pCreateInfo,
16032     VmaAllocator* pAllocator)
16033 {
16034     VMA_ASSERT(pCreateInfo && pAllocator);
16035     VMA_ASSERT(pCreateInfo->vulkanApiVersion == 0 ||
16036         (VK_VERSION_MAJOR(pCreateInfo->vulkanApiVersion) == 1 && VK_VERSION_MINOR(pCreateInfo->vulkanApiVersion) <= 3));
16037     VMA_DEBUG_LOG("vmaCreateAllocator");
16038     *pAllocator = vma_new(pCreateInfo->pAllocationCallbacks, VmaAllocator_T)(pCreateInfo);
16039     VkResult result = (*pAllocator)->Init(pCreateInfo);
16040     if(result < 0)
16041     {
16042         vma_delete(pCreateInfo->pAllocationCallbacks, *pAllocator);
16043         *pAllocator = VK_NULL_HANDLE;
16044     }
16045     return result;
16046 }
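/*
A minimal usage sketch, assuming `instance`, `physicalDevice`, and `device`
were created elsewhere (error handling elided):

\code
VmaAllocatorCreateInfo allocatorCreateInfo = {};
allocatorCreateInfo.vulkanApiVersion = VK_API_VERSION_1_2;
allocatorCreateInfo.instance = instance;
allocatorCreateInfo.physicalDevice = physicalDevice;
allocatorCreateInfo.device = device;

VmaAllocator allocator;
VkResult res = vmaCreateAllocator(&allocatorCreateInfo, &allocator);
// ... use the allocator ...
vmaDestroyAllocator(allocator);
\endcode
*/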
16047 
16048 VMA_CALL_PRE void VMA_CALL_POST vmaDestroyAllocator(
16049     VmaAllocator allocator)
16050 {
16051     if(allocator != VK_NULL_HANDLE)
16052     {
16053         VMA_DEBUG_LOG("vmaDestroyAllocator");
16054         VkAllocationCallbacks allocationCallbacks = allocator->m_AllocationCallbacks; // Have to copy the callbacks when destroying.
16055         vma_delete(&allocationCallbacks, allocator);
16056     }
16057 }
16058 
16059 VMA_CALL_PRE void VMA_CALL_POST vmaGetAllocatorInfo(VmaAllocator allocator, VmaAllocatorInfo* pAllocatorInfo)
16060 {
16061     VMA_ASSERT(allocator && pAllocatorInfo);
16062     pAllocatorInfo->instance = allocator->m_hInstance;
16063     pAllocatorInfo->physicalDevice = allocator->GetPhysicalDevice();
16064     pAllocatorInfo->device = allocator->m_hDevice;
16065 }
16066 
16067 VMA_CALL_PRE void VMA_CALL_POST vmaGetPhysicalDeviceProperties(
16068     VmaAllocator allocator,
16069     const VkPhysicalDeviceProperties **ppPhysicalDeviceProperties)
16070 {
16071     VMA_ASSERT(allocator && ppPhysicalDeviceProperties);
16072     *ppPhysicalDeviceProperties = &allocator->m_PhysicalDeviceProperties;
16073 }
16074 
16075 VMA_CALL_PRE void VMA_CALL_POST vmaGetMemoryProperties(
16076     VmaAllocator allocator,
16077     const VkPhysicalDeviceMemoryProperties** ppPhysicalDeviceMemoryProperties)
16078 {
16079     VMA_ASSERT(allocator && ppPhysicalDeviceMemoryProperties);
16080     *ppPhysicalDeviceMemoryProperties = &allocator->m_MemProps;
16081 }
16082 
16083 VMA_CALL_PRE void VMA_CALL_POST vmaGetMemoryTypeProperties(
16084     VmaAllocator allocator,
16085     uint32_t memoryTypeIndex,
16086     VkMemoryPropertyFlags* pFlags)
16087 {
16088     VMA_ASSERT(allocator && pFlags);
16089     VMA_ASSERT(memoryTypeIndex < allocator->GetMemoryTypeCount());
16090     *pFlags = allocator->m_MemProps.memoryTypes[memoryTypeIndex].propertyFlags;
16091 }
16092 
16093 VMA_CALL_PRE void VMA_CALL_POST vmaSetCurrentFrameIndex(
16094     VmaAllocator allocator,
16095     uint32_t frameIndex)
16096 {
16097     VMA_ASSERT(allocator);
16098 
16099     VMA_DEBUG_GLOBAL_MUTEX_LOCK
16100 
16101     allocator->SetCurrentFrameIndex(frameIndex);
16102 }
16103 
16104 VMA_CALL_PRE void VMA_CALL_POST vmaCalculateStatistics(
16105     VmaAllocator allocator,
16106     VmaTotalStatistics* pStats)
16107 {
16108     VMA_ASSERT(allocator && pStats);
16109     VMA_DEBUG_GLOBAL_MUTEX_LOCK
16110     allocator->CalculateStatistics(pStats);
16111 }
16112 
16113 VMA_CALL_PRE void VMA_CALL_POST vmaGetHeapBudgets(
16114     VmaAllocator allocator,
16115     VmaBudget* pBudgets)
16116 {
16117     VMA_ASSERT(allocator && pBudgets);
16118     VMA_DEBUG_GLOBAL_MUTEX_LOCK
16119     allocator->GetHeapBudgets(pBudgets, 0, allocator->GetMemoryHeapCount());
16120 }
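/*
Usage sketch: `pBudgets` must point to an array with one element per memory heap.

\code
VmaBudget budgets[VK_MAX_MEMORY_HEAPS];
vmaGetHeapBudgets(allocator, budgets);
// budgets[heapIndex].usage and budgets[heapIndex].budget estimate the current
// consumption and the maximum amount this process can safely use per heap.
\endcode
*/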
16121 
16122 #if VMA_STATS_STRING_ENABLED
16123 
16124 VMA_CALL_PRE void VMA_CALL_POST vmaBuildStatsString(
16125     VmaAllocator allocator,
16126     char** ppStatsString,
16127     VkBool32 detailedMap)
16128 {
16129     VMA_ASSERT(allocator && ppStatsString);
16130     VMA_DEBUG_GLOBAL_MUTEX_LOCK
16131 
16132     VmaStringBuilder sb(allocator->GetAllocationCallbacks());
16133     {
16134         VmaBudget budgets[VK_MAX_MEMORY_HEAPS];
16135         allocator->GetHeapBudgets(budgets, 0, allocator->GetMemoryHeapCount());
16136 
16137         VmaTotalStatistics stats;
16138         allocator->CalculateStatistics(&stats);
16139 
16140         VmaJsonWriter json(allocator->GetAllocationCallbacks(), sb);
16141         json.BeginObject();
16142         {
16143             json.WriteString("General");
16144             json.BeginObject();
16145             {
16146                 const VkPhysicalDeviceProperties& deviceProperties = allocator->m_PhysicalDeviceProperties;
16147                 const VkPhysicalDeviceMemoryProperties& memoryProperties = allocator->m_MemProps;
16148 
16149                 json.WriteString("API");
16150                 json.WriteString("Vulkan");
16151 
16152                 json.WriteString("apiVersion");
16153                 json.BeginString();
16154                 json.ContinueString(VK_API_VERSION_MAJOR(deviceProperties.apiVersion));
16155                 json.ContinueString(".");
16156                 json.ContinueString(VK_API_VERSION_MINOR(deviceProperties.apiVersion));
16157                 json.ContinueString(".");
16158                 json.ContinueString(VK_API_VERSION_PATCH(deviceProperties.apiVersion));
16159                 json.EndString();
16160 
16161                 json.WriteString("GPU");
16162                 json.WriteString(deviceProperties.deviceName);
16163                 json.WriteString("deviceType");
16164                 json.WriteNumber(static_cast<uint32_t>(deviceProperties.deviceType));
16165 
16166                 json.WriteString("maxMemoryAllocationCount");
16167                 json.WriteNumber(deviceProperties.limits.maxMemoryAllocationCount);
16168                 json.WriteString("bufferImageGranularity");
16169                 json.WriteNumber(deviceProperties.limits.bufferImageGranularity);
16170                 json.WriteString("nonCoherentAtomSize");
16171                 json.WriteNumber(deviceProperties.limits.nonCoherentAtomSize);
16172 
16173                 json.WriteString("memoryHeapCount");
16174                 json.WriteNumber(memoryProperties.memoryHeapCount);
16175                 json.WriteString("memoryTypeCount");
16176                 json.WriteNumber(memoryProperties.memoryTypeCount);
16177             }
16178             json.EndObject();
16179         }
16180         {
16181             json.WriteString("Total");
16182             VmaPrintDetailedStatistics(json, stats.total);
16183         }
16184         {
16185             json.WriteString("MemoryInfo");
16186             json.BeginObject();
16187             {
16188                 for (uint32_t heapIndex = 0; heapIndex < allocator->GetMemoryHeapCount(); ++heapIndex)
16189                 {
16190                     json.BeginString("Heap ");
16191                     json.ContinueString(heapIndex);
16192                     json.EndString();
16193                     json.BeginObject();
16194                     {
16195                         const VkMemoryHeap& heapInfo = allocator->m_MemProps.memoryHeaps[heapIndex];
16196                         json.WriteString("Flags");
16197                         json.BeginArray(true);
16198                         {
16199                             if (heapInfo.flags & VK_MEMORY_HEAP_DEVICE_LOCAL_BIT)
16200                                 json.WriteString("DEVICE_LOCAL");
16201                         #if VMA_VULKAN_VERSION >= 1001000
16202                             if (heapInfo.flags & VK_MEMORY_HEAP_MULTI_INSTANCE_BIT)
16203                                 json.WriteString("MULTI_INSTANCE");
16204                         #endif
16205 
16206                             VkMemoryHeapFlags flags = heapInfo.flags &
16207                                 ~(VK_MEMORY_HEAP_DEVICE_LOCAL_BIT
16208                         #if VMA_VULKAN_VERSION >= 1001000
16209                                     | VK_MEMORY_HEAP_MULTI_INSTANCE_BIT
16210                         #endif
16211                                     );
16212                             if (flags != 0)
16213                                 json.WriteNumber(flags);
16214                         }
16215                         json.EndArray();
16216 
16217                         json.WriteString("Size");
16218                         json.WriteNumber(heapInfo.size);
16219 
16220                         json.WriteString("Budget");
16221                         json.BeginObject();
16222                         {
16223                             json.WriteString("BudgetBytes");
16224                             json.WriteNumber(budgets[heapIndex].budget);
16225                             json.WriteString("UsageBytes");
16226                             json.WriteNumber(budgets[heapIndex].usage);
16227                         }
16228                         json.EndObject();
16229 
16230                         json.WriteString("Stats");
16231                         VmaPrintDetailedStatistics(json, stats.memoryHeap[heapIndex]);
16232 
16233                         json.WriteString("MemoryPools");
16234                         json.BeginObject();
16235                         {
16236                             for (uint32_t typeIndex = 0; typeIndex < allocator->GetMemoryTypeCount(); ++typeIndex)
16237                             {
16238                                 if (allocator->MemoryTypeIndexToHeapIndex(typeIndex) == heapIndex)
16239                                 {
16240                                     json.BeginString("Type ");
16241                                     json.ContinueString(typeIndex);
16242                                     json.EndString();
16243                                     json.BeginObject();
16244                                     {
16245                                         json.WriteString("Flags");
16246                                         json.BeginArray(true);
16247                                         {
16248                                             VkMemoryPropertyFlags flags = allocator->m_MemProps.memoryTypes[typeIndex].propertyFlags;
16249                                             if (flags & VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT)
16250                                                 json.WriteString("DEVICE_LOCAL");
16251                                             if (flags & VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT)
16252                                                 json.WriteString("HOST_VISIBLE");
16253                                             if (flags & VK_MEMORY_PROPERTY_HOST_COHERENT_BIT)
16254                                                 json.WriteString("HOST_COHERENT");
16255                                             if (flags & VK_MEMORY_PROPERTY_HOST_CACHED_BIT)
16256                                                 json.WriteString("HOST_CACHED");
16257                                             if (flags & VK_MEMORY_PROPERTY_LAZILY_ALLOCATED_BIT)
16258                                                 json.WriteString("LAZILY_ALLOCATED");
16259                                         #if VMA_VULKAN_VERSION >= 1001000
16260                                             if (flags & VK_MEMORY_PROPERTY_PROTECTED_BIT)
16261                                                 json.WriteString("PROTECTED");
16262                                         #endif
16263                                         #if VK_AMD_device_coherent_memory
16264                                             if (flags & VK_MEMORY_PROPERTY_DEVICE_COHERENT_BIT_AMD_COPY)
16265                                                 json.WriteString("DEVICE_COHERENT_AMD");
16266                                             if (flags & VK_MEMORY_PROPERTY_DEVICE_UNCACHED_BIT_AMD_COPY)
16267                                                 json.WriteString("DEVICE_UNCACHED_AMD");
16268                                         #endif
16269 
16270                                             flags &= ~(VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT | VK_MEMORY_PROPERTY_LAZILY_ALLOCATED_BIT
16271                                         #if VMA_VULKAN_VERSION >= 1001000
16272                                                 | VK_MEMORY_PROPERTY_PROTECTED_BIT
16273                                         #endif
16274                                         #if VK_AMD_device_coherent_memory
16275                                                 | VK_MEMORY_PROPERTY_DEVICE_COHERENT_BIT_AMD_COPY
16276                                                 | VK_MEMORY_PROPERTY_DEVICE_UNCACHED_BIT_AMD_COPY
16277                                         #endif
16278                                                 | VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT
16279                                                 | VK_MEMORY_PROPERTY_HOST_COHERENT_BIT
16280                                                 | VK_MEMORY_PROPERTY_HOST_CACHED_BIT);
16281                                             if (flags != 0)
16282                                                 json.WriteNumber(flags);
16283                                         }
16284                                         json.EndArray();
16285 
16286                                         json.WriteString("Stats");
16287                                         VmaPrintDetailedStatistics(json, stats.memoryType[typeIndex]);
16288                                     }
16289                                     json.EndObject();
16290                                 }
16291                             }
16292 
16293                         }
16294                         json.EndObject();
16295                     }
16296                     json.EndObject();
16297                 }
16298             }
16299             json.EndObject();
16300         }
16301 
16302         if (detailedMap == VK_TRUE)
16303             allocator->PrintDetailedMap(json);
16304 
16305         json.EndObject();
16306     }
16307 
16308     *ppStatsString = VmaCreateStringCopy(allocator->GetAllocationCallbacks(), sb.GetData(), sb.GetLength());
16309 }
16310 
16311 VMA_CALL_PRE void VMA_CALL_POST vmaFreeStatsString(
16312     VmaAllocator allocator,
16313     char* pStatsString)
16314 {
16315     if(pStatsString != VMA_NULL)
16316     {
16317         VMA_ASSERT(allocator);
16318         VmaFreeString(allocator->GetAllocationCallbacks(), pStatsString);
16319     }
16320 }
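/*
Typical round trip for the two functions above:

\code
char* statsString = VMA_NULL;
vmaBuildStatsString(allocator, &statsString, VK_TRUE);
// ... save statsString (a JSON document) to a file for offline inspection ...
vmaFreeStatsString(allocator, statsString);
\endcode
*/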
16321 
16322 #endif // VMA_STATS_STRING_ENABLED
16323 
16324 /*
16325 This function is not protected by any mutex because it just reads immutable data.
16326 */
16327 VMA_CALL_PRE VkResult VMA_CALL_POST vmaFindMemoryTypeIndex(
16328     VmaAllocator allocator,
16329     uint32_t memoryTypeBits,
16330     const VmaAllocationCreateInfo* pAllocationCreateInfo,
16331     uint32_t* pMemoryTypeIndex)
16332 {
16333     VMA_ASSERT(allocator != VK_NULL_HANDLE);
16334     VMA_ASSERT(pAllocationCreateInfo != VMA_NULL);
16335     VMA_ASSERT(pMemoryTypeIndex != VMA_NULL);
16336 
16337     return allocator->FindMemoryTypeIndex(memoryTypeBits, pAllocationCreateInfo, UINT32_MAX, pMemoryTypeIndex);
16338 }
16339 
16340 VMA_CALL_PRE VkResult VMA_CALL_POST vmaFindMemoryTypeIndexForBufferInfo(
16341     VmaAllocator allocator,
16342     const VkBufferCreateInfo* pBufferCreateInfo,
16343     const VmaAllocationCreateInfo* pAllocationCreateInfo,
16344     uint32_t* pMemoryTypeIndex)
16345 {
16346     VMA_ASSERT(allocator != VK_NULL_HANDLE);
16347     VMA_ASSERT(pBufferCreateInfo != VMA_NULL);
16348     VMA_ASSERT(pAllocationCreateInfo != VMA_NULL);
16349     VMA_ASSERT(pMemoryTypeIndex != VMA_NULL);
16350 
16351     const VkDevice hDev = allocator->m_hDevice;
16352     const VmaVulkanFunctions* funcs = &allocator->GetVulkanFunctions();
16353     VkResult res;
16354 
16355 #if VMA_VULKAN_VERSION >= 1003000
16356     if(funcs->vkGetDeviceBufferMemoryRequirements)
16357     {
16358         // Can query straight from VkBufferCreateInfo :)
16359         VkDeviceBufferMemoryRequirements devBufMemReq = {VK_STRUCTURE_TYPE_DEVICE_BUFFER_MEMORY_REQUIREMENTS};
16360         devBufMemReq.pCreateInfo = pBufferCreateInfo;
16361 
16362         VkMemoryRequirements2 memReq = {VK_STRUCTURE_TYPE_MEMORY_REQUIREMENTS_2};
16363         (*funcs->vkGetDeviceBufferMemoryRequirements)(hDev, &devBufMemReq, &memReq);
16364 
16365         res = allocator->FindMemoryTypeIndex(
16366             memReq.memoryRequirements.memoryTypeBits, pAllocationCreateInfo, pBufferCreateInfo->usage, pMemoryTypeIndex);
16367     }
16368     else
16369 #endif // #if VMA_VULKAN_VERSION >= 1003000
16370     {
16371         // Must create a dummy buffer to query :(
16372         VkBuffer hBuffer = VK_NULL_HANDLE;
16373         res = funcs->vkCreateBuffer(
16374             hDev, pBufferCreateInfo, allocator->GetAllocationCallbacks(), &hBuffer);
16375         if(res == VK_SUCCESS)
16376         {
16377             VkMemoryRequirements memReq = {};
16378             funcs->vkGetBufferMemoryRequirements(hDev, hBuffer, &memReq);
16379 
16380             res = allocator->FindMemoryTypeIndex(
16381                 memReq.memoryTypeBits, pAllocationCreateInfo, pBufferCreateInfo->usage, pMemoryTypeIndex);
16382 
16383             funcs->vkDestroyBuffer(
16384                 hDev, hBuffer, allocator->GetAllocationCallbacks());
16385         }
16386     }
16387     return res;
16388 }
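/*
Usage sketch, e.g. to fill VmaPoolCreateInfo::memoryTypeIndex for a custom pool:

\code
VkBufferCreateInfo bufCreateInfo = { VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO };
bufCreateInfo.size = 65536;
bufCreateInfo.usage = VK_BUFFER_USAGE_UNIFORM_BUFFER_BIT;

VmaAllocationCreateInfo allocCreateInfo = {};
allocCreateInfo.usage = VMA_MEMORY_USAGE_AUTO;

uint32_t memTypeIndex;
VkResult res = vmaFindMemoryTypeIndexForBufferInfo(
    allocator, &bufCreateInfo, &allocCreateInfo, &memTypeIndex);
\endcode
*/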
16389 
16390 VMA_CALL_PRE VkResult VMA_CALL_POST vmaFindMemoryTypeIndexForImageInfo(
16391     VmaAllocator allocator,
16392     const VkImageCreateInfo* pImageCreateInfo,
16393     const VmaAllocationCreateInfo* pAllocationCreateInfo,
16394     uint32_t* pMemoryTypeIndex)
16395 {
16396     VMA_ASSERT(allocator != VK_NULL_HANDLE);
16397     VMA_ASSERT(pImageCreateInfo != VMA_NULL);
16398     VMA_ASSERT(pAllocationCreateInfo != VMA_NULL);
16399     VMA_ASSERT(pMemoryTypeIndex != VMA_NULL);
16400 
16401     const VkDevice hDev = allocator->m_hDevice;
16402     const VmaVulkanFunctions* funcs = &allocator->GetVulkanFunctions();
16403     VkResult res;
16404 
16405 #if VMA_VULKAN_VERSION >= 1003000
16406     if(funcs->vkGetDeviceImageMemoryRequirements)
16407     {
16408         // Can query straight from VkImageCreateInfo :)
16409         VkDeviceImageMemoryRequirements devImgMemReq = {VK_STRUCTURE_TYPE_DEVICE_IMAGE_MEMORY_REQUIREMENTS};
16410         devImgMemReq.pCreateInfo = pImageCreateInfo;
16411         VMA_ASSERT(pImageCreateInfo->tiling != VK_IMAGE_TILING_DRM_FORMAT_MODIFIER_EXT_COPY && (pImageCreateInfo->flags & VK_IMAGE_CREATE_DISJOINT_BIT_COPY) == 0 &&
16412             "Cannot use this VkImageCreateInfo with vmaFindMemoryTypeIndexForImageInfo as I don't know what to pass as VkDeviceImageMemoryRequirements::planeAspect.");
16413 
16414         VkMemoryRequirements2 memReq = {VK_STRUCTURE_TYPE_MEMORY_REQUIREMENTS_2};
16415         (*funcs->vkGetDeviceImageMemoryRequirements)(hDev, &devImgMemReq, &memReq);
16416 
16417         res = allocator->FindMemoryTypeIndex(
16418             memReq.memoryRequirements.memoryTypeBits, pAllocationCreateInfo, pImageCreateInfo->usage, pMemoryTypeIndex);
16419     }
16420     else
16421 #endif // #if VMA_VULKAN_VERSION >= 1003000
16422     {
16423         // Must create a dummy image to query :(
16424         VkImage hImage = VK_NULL_HANDLE;
16425         res = funcs->vkCreateImage(
16426             hDev, pImageCreateInfo, allocator->GetAllocationCallbacks(), &hImage);
16427         if(res == VK_SUCCESS)
16428         {
16429             VkMemoryRequirements memReq = {};
16430             funcs->vkGetImageMemoryRequirements(hDev, hImage, &memReq);
16431 
16432             res = allocator->FindMemoryTypeIndex(
16433                 memReq.memoryTypeBits, pAllocationCreateInfo, pImageCreateInfo->usage, pMemoryTypeIndex);
16434 
16435             funcs->vkDestroyImage(
16436                 hDev, hImage, allocator->GetAllocationCallbacks());
16437         }
16438     }
16439     return res;
16440 }
16441 
16442 VMA_CALL_PRE VkResult VMA_CALL_POST vmaCreatePool(
16443     VmaAllocator allocator,
16444     const VmaPoolCreateInfo* pCreateInfo,
16445     VmaPool* pPool)
16446 {
16447     VMA_ASSERT(allocator && pCreateInfo && pPool);
16448 
16449     VMA_DEBUG_LOG("vmaCreatePool");
16450 
16451     VMA_DEBUG_GLOBAL_MUTEX_LOCK
16452 
16453     return allocator->CreatePool(pCreateInfo, pPool);
16454 }
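/*
Usage sketch, with `memTypeIndex` obtained e.g. from
vmaFindMemoryTypeIndexForBufferInfo() above:

\code
VmaPoolCreateInfo poolCreateInfo = {};
poolCreateInfo.memoryTypeIndex = memTypeIndex;
poolCreateInfo.blockSize = 128ull * 1024 * 1024; // Optional: fixed 128 MiB blocks.

VmaPool pool;
VkResult res = vmaCreatePool(allocator, &poolCreateInfo, &pool);
\endcode
*/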
16455 
16456 VMA_CALL_PRE void VMA_CALL_POST vmaDestroyPool(
16457     VmaAllocator allocator,
16458     VmaPool pool)
16459 {
16460     VMA_ASSERT(allocator);
16461 
16462     if(pool == VK_NULL_HANDLE)
16463     {
16464         return;
16465     }
16466 
16467     VMA_DEBUG_LOG("vmaDestroyPool");
16468 
16469     VMA_DEBUG_GLOBAL_MUTEX_LOCK
16470 
16471     allocator->DestroyPool(pool);
16472 }
16473 
16474 VMA_CALL_PRE void VMA_CALL_POST vmaGetPoolStatistics(
16475     VmaAllocator allocator,
16476     VmaPool pool,
16477     VmaStatistics* pPoolStats)
16478 {
16479     VMA_ASSERT(allocator && pool && pPoolStats);
16480 
16481     VMA_DEBUG_GLOBAL_MUTEX_LOCK
16482 
16483     allocator->GetPoolStatistics(pool, pPoolStats);
16484 }
16485 
16486 VMA_CALL_PRE void VMA_CALL_POST vmaCalculatePoolStatistics(
16487     VmaAllocator allocator,
16488     VmaPool pool,
16489     VmaDetailedStatistics* pPoolStats)
16490 {
16491     VMA_ASSERT(allocator && pool && pPoolStats);
16492 
16493     VMA_DEBUG_GLOBAL_MUTEX_LOCK
16494 
16495     allocator->CalculatePoolStatistics(pool, pPoolStats);
16496 }
16497 
16498 VMA_CALL_PRE VkResult VMA_CALL_POST vmaCheckPoolCorruption(VmaAllocator allocator, VmaPool pool)
16499 {
16500     VMA_ASSERT(allocator && pool);
16501 
16502     VMA_DEBUG_GLOBAL_MUTEX_LOCK
16503 
16504     VMA_DEBUG_LOG("vmaCheckPoolCorruption");
16505 
16506     return allocator->CheckPoolCorruption(pool);
16507 }
16508 
16509 VMA_CALL_PRE void VMA_CALL_POST vmaGetPoolName(
16510     VmaAllocator allocator,
16511     VmaPool pool,
16512     const char** ppName)
16513 {
16514     VMA_ASSERT(allocator && pool && ppName);
16515 
16516     VMA_DEBUG_LOG("vmaGetPoolName");
16517 
16518     VMA_DEBUG_GLOBAL_MUTEX_LOCK
16519 
16520     *ppName = pool->GetName();
16521 }
16522 
16523 VMA_CALL_PRE void VMA_CALL_POST vmaSetPoolName(
16524     VmaAllocator allocator,
16525     VmaPool pool,
16526     const char* pName)
16527 {
16528     VMA_ASSERT(allocator && pool);
16529 
16530     VMA_DEBUG_LOG("vmaSetPoolName");
16531 
16532     VMA_DEBUG_GLOBAL_MUTEX_LOCK
16533 
16534     pool->SetName(pName);
16535 }
16536 
16537 VMA_CALL_PRE VkResult VMA_CALL_POST vmaAllocateMemory(
16538     VmaAllocator allocator,
16539     const VkMemoryRequirements* pVkMemoryRequirements,
16540     const VmaAllocationCreateInfo* pCreateInfo,
16541     VmaAllocation* pAllocation,
16542     VmaAllocationInfo* pAllocationInfo)
16543 {
16544     VMA_ASSERT(allocator && pVkMemoryRequirements && pCreateInfo && pAllocation);
16545 
16546     VMA_DEBUG_LOG("vmaAllocateMemory");
16547 
16548     VMA_DEBUG_GLOBAL_MUTEX_LOCK
16549 
16550     VkResult result = allocator->AllocateMemory(
16551         *pVkMemoryRequirements,
16552         false, // requiresDedicatedAllocation
16553         false, // prefersDedicatedAllocation
16554         VK_NULL_HANDLE, // dedicatedBuffer
16555         VK_NULL_HANDLE, // dedicatedImage
16556         UINT32_MAX, // dedicatedBufferImageUsage
16557         *pCreateInfo,
16558         VMA_SUBALLOCATION_TYPE_UNKNOWN,
16559         1, // allocationCount
16560         pAllocation);
16561 
16562     if(pAllocationInfo != VMA_NULL && result == VK_SUCCESS)
16563     {
16564         allocator->GetAllocationInfo(*pAllocation, pAllocationInfo);
16565     }
16566 
16567     return result;
16568 }
16569 
16570 VMA_CALL_PRE VkResult VMA_CALL_POST vmaAllocateMemoryPages(
16571     VmaAllocator allocator,
16572     const VkMemoryRequirements* pVkMemoryRequirements,
16573     const VmaAllocationCreateInfo* pCreateInfo,
16574     size_t allocationCount,
16575     VmaAllocation* pAllocations,
16576     VmaAllocationInfo* pAllocationInfo)
16577 {
16578     if(allocationCount == 0)
16579     {
16580         return VK_SUCCESS;
16581     }
16582 
16583     VMA_ASSERT(allocator && pVkMemoryRequirements && pCreateInfo && pAllocations);
16584 
16585     VMA_DEBUG_LOG("vmaAllocateMemoryPages");
16586 
16587     VMA_DEBUG_GLOBAL_MUTEX_LOCK
16588 
16589     VkResult result = allocator->AllocateMemory(
16590         *pVkMemoryRequirements,
16591         false, // requiresDedicatedAllocation
16592         false, // prefersDedicatedAllocation
16593         VK_NULL_HANDLE, // dedicatedBuffer
16594         VK_NULL_HANDLE, // dedicatedImage
16595         UINT32_MAX, // dedicatedBufferImageUsage
16596         *pCreateInfo,
16597         VMA_SUBALLOCATION_TYPE_UNKNOWN,
16598         allocationCount,
16599         pAllocations);
16600 
16601     if(pAllocationInfo != VMA_NULL && result == VK_SUCCESS)
16602     {
16603         for(size_t i = 0; i < allocationCount; ++i)
16604         {
16605             allocator->GetAllocationInfo(pAllocations[i], pAllocationInfo + i);
16606         }
16607     }
16608 
16609     return result;
16610 }
16611 
16612 VMA_CALL_PRE VkResult VMA_CALL_POST vmaAllocateMemoryForBuffer(
16613     VmaAllocator allocator,
16614     VkBuffer buffer,
16615     const VmaAllocationCreateInfo* pCreateInfo,
16616     VmaAllocation* pAllocation,
16617     VmaAllocationInfo* pAllocationInfo)
16618 {
16619     VMA_ASSERT(allocator && buffer != VK_NULL_HANDLE && pCreateInfo && pAllocation);
16620 
16621     VMA_DEBUG_LOG("vmaAllocateMemoryForBuffer");
16622 
16623     VMA_DEBUG_GLOBAL_MUTEX_LOCK
16624 
16625     VkMemoryRequirements vkMemReq = {};
16626     bool requiresDedicatedAllocation = false;
16627     bool prefersDedicatedAllocation = false;
16628     allocator->GetBufferMemoryRequirements(buffer, vkMemReq,
16629         requiresDedicatedAllocation,
16630         prefersDedicatedAllocation);
16631 
16632     VkResult result = allocator->AllocateMemory(
16633         vkMemReq,
16634         requiresDedicatedAllocation,
16635         prefersDedicatedAllocation,
16636         buffer, // dedicatedBuffer
16637         VK_NULL_HANDLE, // dedicatedImage
16638         UINT32_MAX, // dedicatedBufferImageUsage
16639         *pCreateInfo,
16640         VMA_SUBALLOCATION_TYPE_BUFFER,
16641         1, // allocationCount
16642         pAllocation);
16643 
16644     if(pAllocationInfo && result == VK_SUCCESS)
16645     {
16646         allocator->GetAllocationInfo(*pAllocation, pAllocationInfo);
16647     }
16648 
16649     return result;
16650 }
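/*
Usage sketch for this manual create-allocate-bind path. VMA_MEMORY_USAGE_AUTO*
values need functions that know the resource's create info (like
vmaCreateBuffer()), so explicit flags are shown here instead:

\code
VkBuffer buffer; // Created earlier with vkCreateBuffer().

VmaAllocationCreateInfo allocCreateInfo = {};
allocCreateInfo.requiredFlags = VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT;

VmaAllocation allocation;
VkResult res = vmaAllocateMemoryForBuffer(allocator, buffer, &allocCreateInfo, &allocation, VMA_NULL);
if(res == VK_SUCCESS)
    res = vmaBindBufferMemory(allocator, allocation, buffer);
\endcode
*/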
16651 
16652 VMA_CALL_PRE VkResult VMA_CALL_POST vmaAllocateMemoryForImage(
16653     VmaAllocator allocator,
16654     VkImage image,
16655     const VmaAllocationCreateInfo* pCreateInfo,
16656     VmaAllocation* pAllocation,
16657     VmaAllocationInfo* pAllocationInfo)
16658 {
16659     VMA_ASSERT(allocator && image != VK_NULL_HANDLE && pCreateInfo && pAllocation);
16660 
16661     VMA_DEBUG_LOG("vmaAllocateMemoryForImage");
16662 
16663     VMA_DEBUG_GLOBAL_MUTEX_LOCK
16664 
16665     VkMemoryRequirements vkMemReq = {};
16666     bool requiresDedicatedAllocation = false;
16667     bool prefersDedicatedAllocation  = false;
16668     allocator->GetImageMemoryRequirements(image, vkMemReq,
16669         requiresDedicatedAllocation, prefersDedicatedAllocation);
16670 
16671     VkResult result = allocator->AllocateMemory(
16672         vkMemReq,
16673         requiresDedicatedAllocation,
16674         prefersDedicatedAllocation,
16675         VK_NULL_HANDLE, // dedicatedBuffer
16676         image, // dedicatedImage
16677         UINT32_MAX, // dedicatedBufferImageUsage
16678         *pCreateInfo,
16679         VMA_SUBALLOCATION_TYPE_IMAGE_UNKNOWN,
16680         1, // allocationCount
16681         pAllocation);
16682 
16683     if(pAllocationInfo && result == VK_SUCCESS)
16684     {
16685         allocator->GetAllocationInfo(*pAllocation, pAllocationInfo);
16686     }
16687 
16688     return result;
16689 }
16690 
16691 VMA_CALL_PRE void VMA_CALL_POST vmaFreeMemory(
16692     VmaAllocator allocator,
16693     VmaAllocation allocation)
16694 {
16695     VMA_ASSERT(allocator);
16696 
16697     if(allocation == VK_NULL_HANDLE)
16698     {
16699         return;
16700     }
16701 
16702     VMA_DEBUG_LOG("vmaFreeMemory");
16703 
16704     VMA_DEBUG_GLOBAL_MUTEX_LOCK
16705 
16706     allocator->FreeMemory(
16707         1, // allocationCount
16708         &allocation);
16709 }
16710 
16711 VMA_CALL_PRE void VMA_CALL_POST vmaFreeMemoryPages(
16712     VmaAllocator allocator,
16713     size_t allocationCount,
16714     const VmaAllocation* pAllocations)
16715 {
16716     if(allocationCount == 0)
16717     {
16718         return;
16719     }
16720 
16721     VMA_ASSERT(allocator);
16722 
16723     VMA_DEBUG_LOG("vmaFreeMemoryPages");
16724 
16725     VMA_DEBUG_GLOBAL_MUTEX_LOCK
16726 
16727     allocator->FreeMemory(allocationCount, pAllocations);
16728 }
16729 
16730 VMA_CALL_PRE void VMA_CALL_POST vmaGetAllocationInfo(
16731     VmaAllocator allocator,
16732     VmaAllocation allocation,
16733     VmaAllocationInfo* pAllocationInfo)
16734 {
16735     VMA_ASSERT(allocator && allocation && pAllocationInfo);
16736 
16737     VMA_DEBUG_GLOBAL_MUTEX_LOCK
16738 
16739     allocator->GetAllocationInfo(allocation, pAllocationInfo);
16740 }
16741 
16742 VMA_CALL_PRE void VMA_CALL_POST vmaSetAllocationUserData(
16743     VmaAllocator allocator,
16744     VmaAllocation allocation,
16745     void* pUserData)
16746 {
16747     VMA_ASSERT(allocator && allocation);
16748 
16749     VMA_DEBUG_GLOBAL_MUTEX_LOCK
16750 
16751     allocation->SetUserData(allocator, pUserData);
16752 }
16753 
16754 VMA_CALL_PRE void VMA_CALL_POST vmaSetAllocationName(
16755     VmaAllocator VMA_NOT_NULL allocator,
16756     VmaAllocation VMA_NOT_NULL allocation,
16757     const char* VMA_NULLABLE pName)
16758 {
16759     allocation->SetName(allocator, pName);
16760 }
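/*
Usage sketch for the two annotation functions above; the name string is copied
internally, while pUserData is stored as a raw pointer value. `myEngineObject`
is a hypothetical application object:

\code
vmaSetAllocationUserData(allocator, allocation, &myEngineObject);
vmaSetAllocationName(allocator, allocation, "Terrain vertex buffer");
\endcode
*/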
16761 
16762 VMA_CALL_PRE void VMA_CALL_POST vmaGetAllocationMemoryProperties(
16763     VmaAllocator VMA_NOT_NULL allocator,
16764     VmaAllocation VMA_NOT_NULL allocation,
16765     VkMemoryPropertyFlags* VMA_NOT_NULL pFlags)
16766 {
16767     VMA_ASSERT(allocator && allocation && pFlags);
16768     const uint32_t memTypeIndex = allocation->GetMemoryTypeIndex();
16769     *pFlags = allocator->m_MemProps.memoryTypes[memTypeIndex].propertyFlags;
16770 }
16771 
16772 VMA_CALL_PRE VkResult VMA_CALL_POST vmaMapMemory(
16773     VmaAllocator allocator,
16774     VmaAllocation allocation,
16775     void** ppData)
16776 {
16777     VMA_ASSERT(allocator && allocation && ppData);
16778 
16779     VMA_DEBUG_GLOBAL_MUTEX_LOCK
16780 
16781     return allocator->Map(allocation, ppData);
16782 }
16783 
16784 VMA_CALL_PRE void VMA_CALL_POST vmaUnmapMemory(
16785     VmaAllocator allocator,
16786     VmaAllocation allocation)
16787 {
16788     VMA_ASSERT(allocator && allocation);
16789 
16790     VMA_DEBUG_GLOBAL_MUTEX_LOCK
16791 
16792     allocator->Unmap(allocation);
16793 }
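/*
A map-copy-unmap sketch for a HOST_VISIBLE allocation; `srcData` and
`srcDataSize` are hypothetical:

\code
void* mappedData;
if(vmaMapMemory(allocator, allocation, &mappedData) == VK_SUCCESS)
{
    memcpy(mappedData, srcData, srcDataSize);
    vmaUnmapMemory(allocator, allocation);
}
\endcode
*/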
16794 
16795 VMA_CALL_PRE VkResult VMA_CALL_POST vmaFlushAllocation(
16796     VmaAllocator allocator,
16797     VmaAllocation allocation,
16798     VkDeviceSize offset,
16799     VkDeviceSize size)
16800 {
16801     VMA_ASSERT(allocator && allocation);
16802 
16803     VMA_DEBUG_LOG("vmaFlushAllocation");
16804 
16805     VMA_DEBUG_GLOBAL_MUTEX_LOCK
16806 
16807     const VkResult res = allocator->FlushOrInvalidateAllocation(allocation, offset, size, VMA_CACHE_FLUSH);
16808 
16809     return res;
16810 }
16811 
16812 VMA_CALL_PRE VkResult VMA_CALL_POST vmaInvalidateAllocation(
16813     VmaAllocator allocator,
16814     VmaAllocation allocation,
16815     VkDeviceSize offset,
16816     VkDeviceSize size)
16817 {
16818     VMA_ASSERT(allocator && allocation);
16819 
16820     VMA_DEBUG_LOG("vmaInvalidateAllocation");
16821 
16822     VMA_DEBUG_GLOBAL_MUTEX_LOCK
16823 
16824     const VkResult res = allocator->FlushOrInvalidateAllocation(allocation, offset, size, VMA_CACHE_INVALIDATE);
16825 
16826     return res;
16827 }
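/*
Note: flushing after a CPU write (and invalidating before a CPU read) matters
only for memory types lacking VK_MEMORY_PROPERTY_HOST_COHERENT_BIT; for
coherent types these calls return VK_SUCCESS without calling Vulkan. Offsets
are relative to the beginning of the allocation:

\code
memcpy(mappedData, srcData, srcDataSize);
vmaFlushAllocation(allocator, allocation, 0, srcDataSize);
\endcode
*/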
16828 
16829 VMA_CALL_PRE VkResult VMA_CALL_POST vmaFlushAllocations(
16830     VmaAllocator allocator,
16831     uint32_t allocationCount,
16832     const VmaAllocation* allocations,
16833     const VkDeviceSize* offsets,
16834     const VkDeviceSize* sizes)
16835 {
16836     VMA_ASSERT(allocator);
16837 
16838     if(allocationCount == 0)
16839     {
16840         return VK_SUCCESS;
16841     }
16842 
16843     VMA_ASSERT(allocations);
16844 
16845     VMA_DEBUG_LOG("vmaFlushAllocations");
16846 
16847     VMA_DEBUG_GLOBAL_MUTEX_LOCK
16848 
16849     const VkResult res = allocator->FlushOrInvalidateAllocations(allocationCount, allocations, offsets, sizes, VMA_CACHE_FLUSH);
16850 
16851     return res;
16852 }
16853 
16854 VMA_CALL_PRE VkResult VMA_CALL_POST vmaInvalidateAllocations(
16855     VmaAllocator allocator,
16856     uint32_t allocationCount,
16857     const VmaAllocation* allocations,
16858     const VkDeviceSize* offsets,
16859     const VkDeviceSize* sizes)
16860 {
16861     VMA_ASSERT(allocator);
16862 
16863     if(allocationCount == 0)
16864     {
16865         return VK_SUCCESS;
16866     }
16867 
16868     VMA_ASSERT(allocations);
16869 
16870     VMA_DEBUG_LOG("vmaInvalidateAllocations");
16871 
16872     VMA_DEBUG_GLOBAL_MUTEX_LOCK
16873 
16874     const VkResult res = allocator->FlushOrInvalidateAllocations(allocationCount, allocations, offsets, sizes, VMA_CACHE_INVALIDATE);
16875 
16876     return res;
16877 }
16878 
16879 VMA_CALL_PRE VkResult VMA_CALL_POST vmaCheckCorruption(
16880     VmaAllocator allocator,
16881     uint32_t memoryTypeBits)
16882 {
16883     VMA_ASSERT(allocator);
16884 
16885     VMA_DEBUG_LOG("vmaCheckCorruption");
16886 
16887     VMA_DEBUG_GLOBAL_MUTEX_LOCK
16888 
16889     return allocator->CheckCorruption(memoryTypeBits);
16890 }
16891 
16892 VMA_CALL_PRE VkResult VMA_CALL_POST vmaBeginDefragmentation(
16893     VmaAllocator allocator,
16894     const VmaDefragmentationInfo* pInfo,
16895     VmaDefragmentationContext* pContext)
16896 {
16897     VMA_ASSERT(allocator && pInfo && pContext);
16898 
16899     VMA_DEBUG_LOG("vmaBeginDefragmentation");
16900 
16901     if (pInfo->pool != VMA_NULL)
16902     {
16903         // Defragmentation is not supported on pools that use the linear algorithm.
16904         if (pInfo->pool->m_BlockVector.GetAlgorithm() & VMA_POOL_CREATE_LINEAR_ALGORITHM_BIT)
16905             return VK_ERROR_FEATURE_NOT_PRESENT;
16906     }
16907 
16908     VMA_DEBUG_GLOBAL_MUTEX_LOCK
16909 
16910     *pContext = vma_new(allocator, VmaDefragmentationContext_T)(allocator, *pInfo);
16911     return VK_SUCCESS;
16912 }
16913 
16914 VMA_CALL_PRE void VMA_CALL_POST vmaEndDefragmentation(
16915     VmaAllocator allocator,
16916     VmaDefragmentationContext context,
16917     VmaDefragmentationStats* pStats)
16918 {
16919     VMA_ASSERT(allocator && context);
16920 
16921     VMA_DEBUG_LOG("vmaEndDefragmentation");
16922 
16923     VMA_DEBUG_GLOBAL_MUTEX_LOCK
16924 
16925     if (pStats)
16926         context->GetStats(*pStats);
16927     vma_delete(allocator, context);
16928 }
16929 
16930 VMA_CALL_PRE VkResult VMA_CALL_POST vmaBeginDefragmentationPass(
16931     VmaAllocator VMA_NOT_NULL allocator,
16932     VmaDefragmentationContext VMA_NOT_NULL context,
16933     VmaDefragmentationPassMoveInfo* VMA_NOT_NULL pPassInfo)
16934 {
16935     VMA_ASSERT(context && pPassInfo);
16936 
16937     VMA_DEBUG_LOG("vmaBeginDefragmentationPass");
16938 
16939     VMA_DEBUG_GLOBAL_MUTEX_LOCK
16940 
16941     return context->DefragmentPassBegin(*pPassInfo);
16942 }
16943 
16944 VMA_CALL_PRE VkResult VMA_CALL_POST vmaEndDefragmentationPass(
16945     VmaAllocator VMA_NOT_NULL allocator,
16946     VmaDefragmentationContext VMA_NOT_NULL context,
16947     VmaDefragmentationPassMoveInfo* VMA_NOT_NULL pPassInfo)
16948 {
16949     VMA_ASSERT(context && pPassInfo);
16950 
16951     VMA_DEBUG_LOG("vmaEndDefragmentationPass");
16952 
16953     VMA_DEBUG_GLOBAL_MUTEX_LOCK
16954 
16955     return context->DefragmentPassEnd(*pPassInfo);
16956 }
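/*
A sketch of the full defragmentation loop using the four functions above
(actual data copying, e.g. with vkCmdCopyBuffer, is elided):

\code
VmaDefragmentationInfo defragInfo = {};
VmaDefragmentationContext defragCtx;
vmaBeginDefragmentation(allocator, &defragInfo, &defragCtx);

for(;;)
{
    VmaDefragmentationPassMoveInfo pass;
    if(vmaBeginDefragmentationPass(allocator, defragCtx, &pass) == VK_SUCCESS)
        break; // Nothing left to move.
    // Copy data from pass.pMoves[i].srcAllocation to its new place, then:
    if(vmaEndDefragmentationPass(allocator, defragCtx, &pass) == VK_SUCCESS)
        break; // Defragmentation finished.
}

vmaEndDefragmentation(allocator, defragCtx, VMA_NULL);
\endcode
*/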
16957 
16958 VMA_CALL_PRE VkResult VMA_CALL_POST vmaBindBufferMemory(
16959     VmaAllocator allocator,
16960     VmaAllocation allocation,
16961     VkBuffer buffer)
16962 {
16963     VMA_ASSERT(allocator && allocation && buffer);
16964 
16965     VMA_DEBUG_LOG("vmaBindBufferMemory");
16966 
16967     VMA_DEBUG_GLOBAL_MUTEX_LOCK
16968 
16969     return allocator->BindBufferMemory(allocation, 0, buffer, VMA_NULL);
16970 }
16971 
16972 VMA_CALL_PRE VkResult VMA_CALL_POST vmaBindBufferMemory2(
16973     VmaAllocator allocator,
16974     VmaAllocation allocation,
16975     VkDeviceSize allocationLocalOffset,
16976     VkBuffer buffer,
16977     const void* pNext)
16978 {
16979     VMA_ASSERT(allocator && allocation && buffer);
16980 
16981     VMA_DEBUG_LOG("vmaBindBufferMemory2");
16982 
16983     VMA_DEBUG_GLOBAL_MUTEX_LOCK
16984 
16985     return allocator->BindBufferMemory(allocation, allocationLocalOffset, buffer, pNext);
16986 }
16987 
16988 VMA_CALL_PRE VkResult VMA_CALL_POST vmaBindImageMemory(
16989     VmaAllocator allocator,
16990     VmaAllocation allocation,
16991     VkImage image)
16992 {
16993     VMA_ASSERT(allocator && allocation && image);
16994 
16995     VMA_DEBUG_LOG("vmaBindImageMemory");
16996 
16997     VMA_DEBUG_GLOBAL_MUTEX_LOCK
16998 
16999     return allocator->BindImageMemory(allocation, 0, image, VMA_NULL);
17000 }
17001 
17002 VMA_CALL_PRE VkResult VMA_CALL_POST vmaBindImageMemory2(
17003     VmaAllocator allocator,
17004     VmaAllocation allocation,
17005     VkDeviceSize allocationLocalOffset,
17006     VkImage image,
17007     const void* pNext)
17008 {
17009     VMA_ASSERT(allocator && allocation && image);
17010 
17011     VMA_DEBUG_LOG("vmaBindImageMemory2");
17012 
17013     VMA_DEBUG_GLOBAL_MUTEX_LOCK
17014 
17015     return allocator->BindImageMemory(allocation, allocationLocalOffset, image, pNext);
17016 }
17017 
17018 VMA_CALL_PRE VkResult VMA_CALL_POST vmaCreateBuffer(
17019     VmaAllocator allocator,
17020     const VkBufferCreateInfo* pBufferCreateInfo,
17021     const VmaAllocationCreateInfo* pAllocationCreateInfo,
17022     VkBuffer* pBuffer,
17023     VmaAllocation* pAllocation,
17024     VmaAllocationInfo* pAllocationInfo)
17025 {
17026     VMA_ASSERT(allocator && pBufferCreateInfo && pAllocationCreateInfo && pBuffer && pAllocation);
17027 
17028     if(pBufferCreateInfo->size == 0)
17029     {
17030         return VK_ERROR_INITIALIZATION_FAILED;
17031     }
17032     if((pBufferCreateInfo->usage & VK_BUFFER_USAGE_SHADER_DEVICE_ADDRESS_BIT_COPY) != 0 &&
17033         !allocator->m_UseKhrBufferDeviceAddress)
17034     {
17035         VMA_ASSERT(0 && "Creating a buffer with VK_BUFFER_USAGE_SHADER_DEVICE_ADDRESS_BIT is not valid if VMA_ALLOCATOR_CREATE_BUFFER_DEVICE_ADDRESS_BIT was not used.");
17036         return VK_ERROR_INITIALIZATION_FAILED;
17037     }
17038 
17039     VMA_DEBUG_LOG("vmaCreateBuffer");
17040 
17041     VMA_DEBUG_GLOBAL_MUTEX_LOCK
17042 
17043     *pBuffer = VK_NULL_HANDLE;
17044     *pAllocation = VK_NULL_HANDLE;
17045 
17046     // 1. Create VkBuffer.
17047     VkResult res = (*allocator->GetVulkanFunctions().vkCreateBuffer)(
17048         allocator->m_hDevice,
17049         pBufferCreateInfo,
17050         allocator->GetAllocationCallbacks(),
17051         pBuffer);
17052     if(res >= 0)
17053     {
17054         // 2. vkGetBufferMemoryRequirements.
17055         VkMemoryRequirements vkMemReq = {};
17056         bool requiresDedicatedAllocation = false;
17057         bool prefersDedicatedAllocation  = false;
17058         allocator->GetBufferMemoryRequirements(*pBuffer, vkMemReq,
17059             requiresDedicatedAllocation, prefersDedicatedAllocation);
17060 
17061         // 3. Allocate memory using allocator.
17062         res = allocator->AllocateMemory(
17063             vkMemReq,
17064             requiresDedicatedAllocation,
17065             prefersDedicatedAllocation,
17066             *pBuffer, // dedicatedBuffer
17067             VK_NULL_HANDLE, // dedicatedImage
17068             pBufferCreateInfo->usage, // dedicatedBufferImageUsage
17069             *pAllocationCreateInfo,
17070             VMA_SUBALLOCATION_TYPE_BUFFER,
17071             1, // allocationCount
17072             pAllocation);
17073 
17074         if(res >= 0)
17075         {
17076             // 4. Bind buffer with memory.
17077             if((pAllocationCreateInfo->flags & VMA_ALLOCATION_CREATE_DONT_BIND_BIT) == 0)
17078             {
17079                 res = allocator->BindBufferMemory(*pAllocation, 0, *pBuffer, VMA_NULL);
17080             }
17081             if(res >= 0)
17082             {
17083                 // All steps succeeded.
17084                 #if VMA_STATS_STRING_ENABLED
17085                     (*pAllocation)->InitBufferImageUsage(pBufferCreateInfo->usage);
17086                 #endif
17087                 if(pAllocationInfo != VMA_NULL)
17088                 {
17089                     allocator->GetAllocationInfo(*pAllocation, pAllocationInfo);
17090                 }
17091 
17092                 return VK_SUCCESS;
17093             }
17094             allocator->FreeMemory(
17095                 1, // allocationCount
17096                 pAllocation);
17097             *pAllocation = VK_NULL_HANDLE;
17098             (*allocator->GetVulkanFunctions().vkDestroyBuffer)(allocator->m_hDevice, *pBuffer, allocator->GetAllocationCallbacks());
17099             *pBuffer = VK_NULL_HANDLE;
17100             return res;
17101         }
17102         (*allocator->GetVulkanFunctions().vkDestroyBuffer)(allocator->m_hDevice, *pBuffer, allocator->GetAllocationCallbacks());
17103         *pBuffer = VK_NULL_HANDLE;
17104         return res;
17105     }
17106     return res;
17107 }
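/*
The canonical one-call path sketched:

\code
VkBufferCreateInfo bufCreateInfo = { VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO };
bufCreateInfo.size = 65536;
bufCreateInfo.usage = VK_BUFFER_USAGE_VERTEX_BUFFER_BIT | VK_BUFFER_USAGE_TRANSFER_DST_BIT;

VmaAllocationCreateInfo allocCreateInfo = {};
allocCreateInfo.usage = VMA_MEMORY_USAGE_AUTO;

VkBuffer buffer;
VmaAllocation allocation;
VkResult res = vmaCreateBuffer(allocator, &bufCreateInfo, &allocCreateInfo,
    &buffer, &allocation, VMA_NULL);
\endcode
*/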
17108 
17109 VMA_CALL_PRE VkResult VMA_CALL_POST vmaCreateBufferWithAlignment(
17110     VmaAllocator allocator,
17111     const VkBufferCreateInfo* pBufferCreateInfo,
17112     const VmaAllocationCreateInfo* pAllocationCreateInfo,
17113     VkDeviceSize minAlignment,
17114     VkBuffer* pBuffer,
17115     VmaAllocation* pAllocation,
17116     VmaAllocationInfo* pAllocationInfo)
17117 {
17118     VMA_ASSERT(allocator && pBufferCreateInfo && pAllocationCreateInfo && VmaIsPow2(minAlignment) && pBuffer && pAllocation);
17119 
17120     if(pBufferCreateInfo->size == 0)
17121     {
17122         return VK_ERROR_INITIALIZATION_FAILED;
17123     }
17124     if((pBufferCreateInfo->usage & VK_BUFFER_USAGE_SHADER_DEVICE_ADDRESS_BIT_COPY) != 0 &&
17125         !allocator->m_UseKhrBufferDeviceAddress)
17126     {
17127         VMA_ASSERT(0 && "Creating a buffer with VK_BUFFER_USAGE_SHADER_DEVICE_ADDRESS_BIT is not valid if VMA_ALLOCATOR_CREATE_BUFFER_DEVICE_ADDRESS_BIT was not used.");
17128         return VK_ERROR_INITIALIZATION_FAILED;
17129     }
17130 
17131     VMA_DEBUG_LOG("vmaCreateBufferWithAlignment");
17132 
17133     VMA_DEBUG_GLOBAL_MUTEX_LOCK
17134 
17135     *pBuffer = VK_NULL_HANDLE;
17136     *pAllocation = VK_NULL_HANDLE;
17137 
17138     // 1. Create VkBuffer.
17139     VkResult res = (*allocator->GetVulkanFunctions().vkCreateBuffer)(
17140         allocator->m_hDevice,
17141         pBufferCreateInfo,
17142         allocator->GetAllocationCallbacks(),
17143         pBuffer);
17144     if(res >= 0)
17145     {
17146         // 2. vkGetBufferMemoryRequirements.
17147         VkMemoryRequirements vkMemReq = {};
17148         bool requiresDedicatedAllocation = false;
17149         bool prefersDedicatedAllocation  = false;
17150         allocator->GetBufferMemoryRequirements(*pBuffer, vkMemReq,
17151             requiresDedicatedAllocation, prefersDedicatedAllocation);
17152 
17153         // 2a. Include minAlignment
17154         vkMemReq.alignment = VMA_MAX(vkMemReq.alignment, minAlignment);
17155 
17156         // 3. Allocate memory using allocator.
17157         res = allocator->AllocateMemory(
17158             vkMemReq,
17159             requiresDedicatedAllocation,
17160             prefersDedicatedAllocation,
17161             *pBuffer, // dedicatedBuffer
17162             VK_NULL_HANDLE, // dedicatedImage
17163             pBufferCreateInfo->usage, // dedicatedBufferImageUsage
17164             *pAllocationCreateInfo,
17165             VMA_SUBALLOCATION_TYPE_BUFFER,
17166             1, // allocationCount
17167             pAllocation);
17168 
17169         if(res >= 0)
17170         {
17171             // 3. Bind buffer with memory.
17172             if((pAllocationCreateInfo->flags & VMA_ALLOCATION_CREATE_DONT_BIND_BIT) == 0)
17173             {
17174                 res = allocator->BindBufferMemory(*pAllocation, 0, *pBuffer, VMA_NULL);
17175             }
17176             if(res >= 0)
17177             {
17178                 // All steps succeeded.
17179                 #if VMA_STATS_STRING_ENABLED
17180                     (*pAllocation)->InitBufferImageUsage(pBufferCreateInfo->usage);
17181                 #endif
17182                 if(pAllocationInfo != VMA_NULL)
17183                 {
17184                     allocator->GetAllocationInfo(*pAllocation, pAllocationInfo);
17185                 }
17186 
17187                 return VK_SUCCESS;
17188             }
17189             allocator->FreeMemory(
17190                 1, // allocationCount
17191                 pAllocation);
17192             *pAllocation = VK_NULL_HANDLE;
17193             (*allocator->GetVulkanFunctions().vkDestroyBuffer)(allocator->m_hDevice, *pBuffer, allocator->GetAllocationCallbacks());
17194             *pBuffer = VK_NULL_HANDLE;
17195             return res;
17196         }
17197         (*allocator->GetVulkanFunctions().vkDestroyBuffer)(allocator->m_hDevice, *pBuffer, allocator->GetAllocationCallbacks());
17198         *pBuffer = VK_NULL_HANDLE;
17199         return res;
17200     }
17201     return res;
17202 }
17203 
VMA_CALL_PRE VkResult VMA_CALL_POST vmaCreateAliasingBuffer(
    VmaAllocator VMA_NOT_NULL allocator,
    VmaAllocation VMA_NOT_NULL allocation,
    const VkBufferCreateInfo* VMA_NOT_NULL pBufferCreateInfo,
    VkBuffer VMA_NULLABLE_NON_DISPATCHABLE* VMA_NOT_NULL pBuffer)
{
    VMA_ASSERT(allocator && pBufferCreateInfo && pBuffer && allocation);

    VMA_DEBUG_LOG("vmaCreateAliasingBuffer");

    *pBuffer = VK_NULL_HANDLE;

    if (pBufferCreateInfo->size == 0)
    {
        return VK_ERROR_INITIALIZATION_FAILED;
    }
    if ((pBufferCreateInfo->usage & VK_BUFFER_USAGE_SHADER_DEVICE_ADDRESS_BIT_COPY) != 0 &&
        !allocator->m_UseKhrBufferDeviceAddress)
    {
        VMA_ASSERT(0 && "Creating a buffer with VK_BUFFER_USAGE_SHADER_DEVICE_ADDRESS_BIT is not valid if VMA_ALLOCATOR_CREATE_BUFFER_DEVICE_ADDRESS_BIT was not used.");
        return VK_ERROR_INITIALIZATION_FAILED;
    }

    VMA_DEBUG_GLOBAL_MUTEX_LOCK

    // 1. Create VkBuffer.
    VkResult res = (*allocator->GetVulkanFunctions().vkCreateBuffer)(
        allocator->m_hDevice,
        pBufferCreateInfo,
        allocator->GetAllocationCallbacks(),
        pBuffer);
    if (res >= 0)
    {
        // 2. Bind buffer with memory.
        res = allocator->BindBufferMemory(allocation, 0, *pBuffer, VMA_NULL);
        if (res >= 0)
        {
            return VK_SUCCESS;
        }
        (*allocator->GetVulkanFunctions().vkDestroyBuffer)(allocator->m_hDevice, *pBuffer, allocator->GetAllocationCallbacks());
    }
    return res;
}

VMA_CALL_PRE void VMA_CALL_POST vmaDestroyBuffer(
    VmaAllocator allocator,
    VkBuffer buffer,
    VmaAllocation allocation)
{
    VMA_ASSERT(allocator);

    if(buffer == VK_NULL_HANDLE && allocation == VK_NULL_HANDLE)
    {
        return;
    }

    VMA_DEBUG_LOG("vmaDestroyBuffer");

    VMA_DEBUG_GLOBAL_MUTEX_LOCK

    if(buffer != VK_NULL_HANDLE)
    {
        (*allocator->GetVulkanFunctions().vkDestroyBuffer)(allocator->m_hDevice, buffer, allocator->GetAllocationCallbacks());
    }

    if(allocation != VK_NULL_HANDLE)
    {
        allocator->FreeMemory(
            1, // allocationCount
            &allocation);
    }
}

VMA_CALL_PRE VkResult VMA_CALL_POST vmaCreateImage(
    VmaAllocator allocator,
    const VkImageCreateInfo* pImageCreateInfo,
    const VmaAllocationCreateInfo* pAllocationCreateInfo,
    VkImage* pImage,
    VmaAllocation* pAllocation,
    VmaAllocationInfo* pAllocationInfo)
{
    VMA_ASSERT(allocator && pImageCreateInfo && pAllocationCreateInfo && pImage && pAllocation);

    if(pImageCreateInfo->extent.width == 0 ||
        pImageCreateInfo->extent.height == 0 ||
        pImageCreateInfo->extent.depth == 0 ||
        pImageCreateInfo->mipLevels == 0 ||
        pImageCreateInfo->arrayLayers == 0)
    {
        return VK_ERROR_INITIALIZATION_FAILED;
    }

    VMA_DEBUG_LOG("vmaCreateImage");

    VMA_DEBUG_GLOBAL_MUTEX_LOCK

    *pImage = VK_NULL_HANDLE;
    *pAllocation = VK_NULL_HANDLE;

    // 1. Create VkImage.
    VkResult res = (*allocator->GetVulkanFunctions().vkCreateImage)(
        allocator->m_hDevice,
        pImageCreateInfo,
        allocator->GetAllocationCallbacks(),
        pImage);
    if(res >= 0)
    {
        VmaSuballocationType suballocType = pImageCreateInfo->tiling == VK_IMAGE_TILING_OPTIMAL ?
            VMA_SUBALLOCATION_TYPE_IMAGE_OPTIMAL :
            VMA_SUBALLOCATION_TYPE_IMAGE_LINEAR;

        // 2. Allocate memory using allocator.
        VkMemoryRequirements vkMemReq = {};
        bool requiresDedicatedAllocation = false;
        bool prefersDedicatedAllocation  = false;
        allocator->GetImageMemoryRequirements(*pImage, vkMemReq,
            requiresDedicatedAllocation, prefersDedicatedAllocation);

        res = allocator->AllocateMemory(
            vkMemReq,
            requiresDedicatedAllocation,
            prefersDedicatedAllocation,
            VK_NULL_HANDLE, // dedicatedBuffer
            *pImage, // dedicatedImage
            pImageCreateInfo->usage, // dedicatedBufferImageUsage
            *pAllocationCreateInfo,
            suballocType,
            1, // allocationCount
            pAllocation);

        if(res >= 0)
        {
            // 3. Bind image with memory.
            if((pAllocationCreateInfo->flags & VMA_ALLOCATION_CREATE_DONT_BIND_BIT) == 0)
            {
                res = allocator->BindImageMemory(*pAllocation, 0, *pImage, VMA_NULL);
            }
            if(res >= 0)
            {
                // All steps succeeded.
                #if VMA_STATS_STRING_ENABLED
                    (*pAllocation)->InitBufferImageUsage(pImageCreateInfo->usage);
                #endif
                if(pAllocationInfo != VMA_NULL)
                {
                    allocator->GetAllocationInfo(*pAllocation, pAllocationInfo);
                }

                return VK_SUCCESS;
            }
            allocator->FreeMemory(
                1, // allocationCount
                pAllocation);
            *pAllocation = VK_NULL_HANDLE;
            (*allocator->GetVulkanFunctions().vkDestroyImage)(allocator->m_hDevice, *pImage, allocator->GetAllocationCallbacks());
            *pImage = VK_NULL_HANDLE;
            return res;
        }
        (*allocator->GetVulkanFunctions().vkDestroyImage)(allocator->m_hDevice, *pImage, allocator->GetAllocationCallbacks());
        *pImage = VK_NULL_HANDLE;
        return res;
    }
    return res;
}

VMA_CALL_PRE VkResult VMA_CALL_POST vmaCreateAliasingImage(
    VmaAllocator VMA_NOT_NULL allocator,
    VmaAllocation VMA_NOT_NULL allocation,
    const VkImageCreateInfo* VMA_NOT_NULL pImageCreateInfo,
    VkImage VMA_NULLABLE_NON_DISPATCHABLE* VMA_NOT_NULL pImage)
{
    VMA_ASSERT(allocator && pImageCreateInfo && pImage && allocation);

    *pImage = VK_NULL_HANDLE;

    VMA_DEBUG_LOG("vmaCreateAliasingImage");

    if (pImageCreateInfo->extent.width == 0 ||
        pImageCreateInfo->extent.height == 0 ||
        pImageCreateInfo->extent.depth == 0 ||
        pImageCreateInfo->mipLevels == 0 ||
        pImageCreateInfo->arrayLayers == 0)
    {
        return VK_ERROR_INITIALIZATION_FAILED;
    }

    VMA_DEBUG_GLOBAL_MUTEX_LOCK

    // 1. Create VkImage.
    VkResult res = (*allocator->GetVulkanFunctions().vkCreateImage)(
        allocator->m_hDevice,
        pImageCreateInfo,
        allocator->GetAllocationCallbacks(),
        pImage);
    if (res >= 0)
    {
        // 2. Bind image with memory.
        res = allocator->BindImageMemory(allocation, 0, *pImage, VMA_NULL);
        if (res >= 0)
        {
            return VK_SUCCESS;
        }
        (*allocator->GetVulkanFunctions().vkDestroyImage)(allocator->m_hDevice, *pImage, allocator->GetAllocationCallbacks());
    }
    return res;
}

VMA_CALL_PRE void VMA_CALL_POST vmaDestroyImage(
    VmaAllocator VMA_NOT_NULL allocator,
    VkImage VMA_NULLABLE_NON_DISPATCHABLE image,
    VmaAllocation VMA_NULLABLE allocation)
{
    VMA_ASSERT(allocator);

    if(image == VK_NULL_HANDLE && allocation == VK_NULL_HANDLE)
    {
        return;
    }

    VMA_DEBUG_LOG("vmaDestroyImage");

    VMA_DEBUG_GLOBAL_MUTEX_LOCK

    if(image != VK_NULL_HANDLE)
    {
        (*allocator->GetVulkanFunctions().vkDestroyImage)(allocator->m_hDevice, image, allocator->GetAllocationCallbacks());
    }
    if(allocation != VK_NULL_HANDLE)
    {
        allocator->FreeMemory(
            1, // allocationCount
            &allocation);
    }
}

VMA_CALL_PRE VkResult VMA_CALL_POST vmaCreateVirtualBlock(
    const VmaVirtualBlockCreateInfo* VMA_NOT_NULL pCreateInfo,
    VmaVirtualBlock VMA_NULLABLE * VMA_NOT_NULL pVirtualBlock)
{
    VMA_ASSERT(pCreateInfo && pVirtualBlock);
    VMA_ASSERT(pCreateInfo->size > 0);
    VMA_DEBUG_LOG("vmaCreateVirtualBlock");
    VMA_DEBUG_GLOBAL_MUTEX_LOCK;
    *pVirtualBlock = vma_new(pCreateInfo->pAllocationCallbacks, VmaVirtualBlock_T)(*pCreateInfo);
    VkResult res = (*pVirtualBlock)->Init();
    if(res < 0)
    {
        vma_delete(pCreateInfo->pAllocationCallbacks, *pVirtualBlock);
        *pVirtualBlock = VK_NULL_HANDLE;
    }
    return res;
}

VMA_CALL_PRE void VMA_CALL_POST vmaDestroyVirtualBlock(VmaVirtualBlock VMA_NULLABLE virtualBlock)
{
    if(virtualBlock != VK_NULL_HANDLE)
    {
        VMA_DEBUG_LOG("vmaDestroyVirtualBlock");
        VMA_DEBUG_GLOBAL_MUTEX_LOCK;
        VkAllocationCallbacks allocationCallbacks = virtualBlock->m_AllocationCallbacks; // Have to copy the callbacks when destroying.
        vma_delete(&allocationCallbacks, virtualBlock);
    }
}

VMA_CALL_PRE VkBool32 VMA_CALL_POST vmaIsVirtualBlockEmpty(VmaVirtualBlock VMA_NOT_NULL virtualBlock)
{
    VMA_ASSERT(virtualBlock != VK_NULL_HANDLE);
    VMA_DEBUG_LOG("vmaIsVirtualBlockEmpty");
    VMA_DEBUG_GLOBAL_MUTEX_LOCK;
    return virtualBlock->IsEmpty() ? VK_TRUE : VK_FALSE;
}

VMA_CALL_PRE void VMA_CALL_POST vmaGetVirtualAllocationInfo(VmaVirtualBlock VMA_NOT_NULL virtualBlock,
    VmaVirtualAllocation VMA_NOT_NULL_NON_DISPATCHABLE allocation, VmaVirtualAllocationInfo* VMA_NOT_NULL pVirtualAllocInfo)
{
    VMA_ASSERT(virtualBlock != VK_NULL_HANDLE && pVirtualAllocInfo != VMA_NULL);
    VMA_DEBUG_LOG("vmaGetVirtualAllocationInfo");
    VMA_DEBUG_GLOBAL_MUTEX_LOCK;
    virtualBlock->GetAllocationInfo(allocation, *pVirtualAllocInfo);
}

VMA_CALL_PRE VkResult VMA_CALL_POST vmaVirtualAllocate(VmaVirtualBlock VMA_NOT_NULL virtualBlock,
    const VmaVirtualAllocationCreateInfo* VMA_NOT_NULL pCreateInfo, VmaVirtualAllocation VMA_NULLABLE_NON_DISPATCHABLE* VMA_NOT_NULL pAllocation,
    VkDeviceSize* VMA_NULLABLE pOffset)
{
    VMA_ASSERT(virtualBlock != VK_NULL_HANDLE && pCreateInfo != VMA_NULL && pAllocation != VMA_NULL);
    VMA_DEBUG_LOG("vmaVirtualAllocate");
    VMA_DEBUG_GLOBAL_MUTEX_LOCK;
    return virtualBlock->Allocate(*pCreateInfo, *pAllocation, pOffset);
}

VMA_CALL_PRE void VMA_CALL_POST vmaVirtualFree(VmaVirtualBlock VMA_NOT_NULL virtualBlock, VmaVirtualAllocation VMA_NULLABLE_NON_DISPATCHABLE allocation)
{
    if(allocation != VK_NULL_HANDLE)
    {
        VMA_ASSERT(virtualBlock != VK_NULL_HANDLE);
        VMA_DEBUG_LOG("vmaVirtualFree");
        VMA_DEBUG_GLOBAL_MUTEX_LOCK;
        virtualBlock->Free(allocation);
    }
}

VMA_CALL_PRE void VMA_CALL_POST vmaClearVirtualBlock(VmaVirtualBlock VMA_NOT_NULL virtualBlock)
{
    VMA_ASSERT(virtualBlock != VK_NULL_HANDLE);
    VMA_DEBUG_LOG("vmaClearVirtualBlock");
    VMA_DEBUG_GLOBAL_MUTEX_LOCK;
    virtualBlock->Clear();
}

VMA_CALL_PRE void VMA_CALL_POST vmaSetVirtualAllocationUserData(VmaVirtualBlock VMA_NOT_NULL virtualBlock,
    VmaVirtualAllocation VMA_NOT_NULL_NON_DISPATCHABLE allocation, void* VMA_NULLABLE pUserData)
{
    VMA_ASSERT(virtualBlock != VK_NULL_HANDLE);
    VMA_DEBUG_LOG("vmaSetVirtualAllocationUserData");
    VMA_DEBUG_GLOBAL_MUTEX_LOCK;
    virtualBlock->SetAllocationUserData(allocation, pUserData);
}

VMA_CALL_PRE void VMA_CALL_POST vmaGetVirtualBlockStatistics(VmaVirtualBlock VMA_NOT_NULL virtualBlock,
    VmaStatistics* VMA_NOT_NULL pStats)
{
    VMA_ASSERT(virtualBlock != VK_NULL_HANDLE && pStats != VMA_NULL);
    VMA_DEBUG_LOG("vmaGetVirtualBlockStatistics");
    VMA_DEBUG_GLOBAL_MUTEX_LOCK;
    virtualBlock->GetStatistics(*pStats);
}

VMA_CALL_PRE void VMA_CALL_POST vmaCalculateVirtualBlockStatistics(VmaVirtualBlock VMA_NOT_NULL virtualBlock,
    VmaDetailedStatistics* VMA_NOT_NULL pStats)
{
    VMA_ASSERT(virtualBlock != VK_NULL_HANDLE && pStats != VMA_NULL);
    VMA_DEBUG_LOG("vmaCalculateVirtualBlockStatistics");
    VMA_DEBUG_GLOBAL_MUTEX_LOCK;
    virtualBlock->CalculateDetailedStatistics(*pStats);
}

#if VMA_STATS_STRING_ENABLED

VMA_CALL_PRE void VMA_CALL_POST vmaBuildVirtualBlockStatsString(VmaVirtualBlock VMA_NOT_NULL virtualBlock,
    char* VMA_NULLABLE * VMA_NOT_NULL ppStatsString, VkBool32 detailedMap)
{
    VMA_ASSERT(virtualBlock != VK_NULL_HANDLE && ppStatsString != VMA_NULL);
    VMA_DEBUG_GLOBAL_MUTEX_LOCK;
    const VkAllocationCallbacks* allocationCallbacks = virtualBlock->GetAllocationCallbacks();
    VmaStringBuilder sb(allocationCallbacks);
    virtualBlock->BuildStatsString(detailedMap != VK_FALSE, sb);
    *ppStatsString = VmaCreateStringCopy(allocationCallbacks, sb.GetData(), sb.GetLength());
}

VMA_CALL_PRE void VMA_CALL_POST vmaFreeVirtualBlockStatsString(VmaVirtualBlock VMA_NOT_NULL virtualBlock,
    char* VMA_NULLABLE pStatsString)
{
    if(pStatsString != VMA_NULL)
    {
        VMA_ASSERT(virtualBlock != VK_NULL_HANDLE);
        VMA_DEBUG_GLOBAL_MUTEX_LOCK;
        VmaFreeString(virtualBlock->GetAllocationCallbacks(), pStatsString);
    }
}
#endif // VMA_STATS_STRING_ENABLED
#endif // _VMA_PUBLIC_INTERFACE
#endif // VMA_IMPLEMENTATION

/**
\page quick_start Quick start

\section quick_start_project_setup Project setup

Vulkan Memory Allocator comes in the form of an "stb-style" single header file.
You don't need to build it as a separate library project.
You can add this file directly to your project and commit it to your code repository next to your other source files.

"Single header" doesn't mean that everything is contained in C/C++ declarations,
as tends to be the case with inline functions or C++ templates.
It means that the implementation is bundled with the interface in a single file and needs to be extracted using a preprocessor macro.
If you don't do it properly, you will get linker errors.

To do it properly:

-# Include the "vk_mem_alloc.h" file in each CPP file where you want to use the library.
   This includes declarations of all members of the library.
-# In exactly one CPP file, define the following macro before this include.
   It also enables internal definitions.

\code
#define VMA_IMPLEMENTATION
#include "vk_mem_alloc.h"
\endcode

It may be a good idea to create a dedicated CPP file just for this purpose.

This library includes the header `<vulkan/vulkan.h>`, which in turn
includes `<windows.h>` on Windows. If you need some specific macros defined
before including these headers (like `WIN32_LEAN_AND_MEAN` or
`WINVER` for Windows, `VK_USE_PLATFORM_WIN32_KHR` for Vulkan), you must define
them before every `#include` of this library.

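For example, a Windows build using the Win32 WSI extension might look like the
following sketch (the macros chosen here are illustrative; define whichever ones
your platform actually needs):

\code
// Hypothetical example: platform macros defined before the library include.
#define WIN32_LEAN_AND_MEAN       // Trim <windows.h>, pulled in via <vulkan/vulkan.h>.
#define VK_USE_PLATFORM_WIN32_KHR // Expose Win32 surface types in the Vulkan headers.
#include "vk_mem_alloc.h"
\endcode
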
This library is written in C++, but has a C-compatible interface.
Thus you can include and use vk_mem_alloc.h in C or C++ code, but the full
implementation with the `VMA_IMPLEMENTATION` macro must be compiled as C++, NOT as C.
Some features of C++14 are used. STL containers, RTTI, and C++ exceptions are not used.


\section quick_start_initialization Initialization

At program startup:

-# Initialize Vulkan to have `VkPhysicalDevice`, `VkDevice`, and `VkInstance` objects.
-# Fill the VmaAllocatorCreateInfo structure and create a #VmaAllocator object by
   calling vmaCreateAllocator().

Only members `physicalDevice`, `device`, and `instance` are required.
However, you should inform the library which Vulkan version you use by setting
VmaAllocatorCreateInfo::vulkanApiVersion and which extensions you enabled
by setting VmaAllocatorCreateInfo::flags (like #VMA_ALLOCATOR_CREATE_BUFFER_DEVICE_ADDRESS_BIT for VK_KHR_buffer_device_address).
Otherwise, VMA would use only features of Vulkan 1.0 core with no extensions.

You may need to configure how VMA imports Vulkan functions. There are 3 ways to do this:

-# **If you link with the Vulkan static library** (e.g. "vulkan-1.lib" on Windows):
   - You don't need to do anything.
   - VMA will use these, as the macro `VMA_STATIC_VULKAN_FUNCTIONS` is defined to 1 by default.
-# **If you want VMA to fetch pointers to Vulkan functions dynamically** using `vkGetInstanceProcAddr` and
   `vkGetDeviceProcAddr` (this is the option presented in the example below):
   - Define `VMA_STATIC_VULKAN_FUNCTIONS` to 0 and `VMA_DYNAMIC_VULKAN_FUNCTIONS` to 1.
   - Provide pointers to these two functions via VmaVulkanFunctions::vkGetInstanceProcAddr and
     VmaVulkanFunctions::vkGetDeviceProcAddr.
   - The library will fetch pointers to all other functions it needs internally.
-# **If you fetch pointers to all Vulkan functions in a custom way**, e.g. using some loader like
   [Volk](https://github.com/zeux/volk):
   - Define `VMA_STATIC_VULKAN_FUNCTIONS` and `VMA_DYNAMIC_VULKAN_FUNCTIONS` to 0.
   - Pass these pointers via the structure #VmaVulkanFunctions.

\code
VmaVulkanFunctions vulkanFunctions = {};
vulkanFunctions.vkGetInstanceProcAddr = &vkGetInstanceProcAddr;
vulkanFunctions.vkGetDeviceProcAddr = &vkGetDeviceProcAddr;

VmaAllocatorCreateInfo allocatorCreateInfo = {};
allocatorCreateInfo.vulkanApiVersion = VK_API_VERSION_1_2;
allocatorCreateInfo.physicalDevice = physicalDevice;
allocatorCreateInfo.device = device;
allocatorCreateInfo.instance = instance;
allocatorCreateInfo.pVulkanFunctions = &vulkanFunctions;

VmaAllocator allocator;
vmaCreateAllocator(&allocatorCreateInfo, &allocator);
\endcode


\section quick_start_resource_allocation Resource allocation

When you want to create a buffer or image:

-# Fill the `VkBufferCreateInfo` / `VkImageCreateInfo` structure.
-# Fill the VmaAllocationCreateInfo structure.
-# Call vmaCreateBuffer() / vmaCreateImage() to get a `VkBuffer`/`VkImage` with memory
   already allocated and bound to it, plus a #VmaAllocation object that represents its underlying memory.

\code
VkBufferCreateInfo bufferInfo = { VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO };
bufferInfo.size = 65536;
bufferInfo.usage = VK_BUFFER_USAGE_VERTEX_BUFFER_BIT | VK_BUFFER_USAGE_TRANSFER_DST_BIT;

VmaAllocationCreateInfo allocInfo = {};
allocInfo.usage = VMA_MEMORY_USAGE_AUTO;

VkBuffer buffer;
VmaAllocation allocation;
vmaCreateBuffer(allocator, &bufferInfo, &allocInfo, &buffer, &allocation, nullptr);
\endcode

Don't forget to destroy your objects when no longer needed:

\code
vmaDestroyBuffer(allocator, buffer, allocation);
vmaDestroyAllocator(allocator);
\endcode


\page choosing_memory_type Choosing memory type

Physical devices in Vulkan support various combinations of memory heaps and
types. Help with choosing the correct and optimal memory type for your specific
resource is one of the key features of this library. You can use it by filling
appropriate members of the VmaAllocationCreateInfo structure, as described below.
You can also combine multiple methods.

-# If you just want to find a memory type index that meets your requirements, you
   can use one of the functions: vmaFindMemoryTypeIndexForBufferInfo(),
   vmaFindMemoryTypeIndexForImageInfo(), vmaFindMemoryTypeIndex().
-# If you want to allocate a region of device memory without association with any
   specific image or buffer, you can use function vmaAllocateMemory(). Usage of
   this function is not recommended and usually not needed.
   The vmaAllocateMemoryPages() function is also provided for creating multiple allocations at once,
   which may be useful for sparse binding.
-# If you already have a buffer or an image created, want to allocate memory
   for it, and will bind it yourself, you can use function
   vmaAllocateMemoryForBuffer(), vmaAllocateMemoryForImage() (see the sketch below).
   For binding you should use functions vmaBindBufferMemory(), vmaBindImageMemory()
   or their extended versions: vmaBindBufferMemory2(), vmaBindImageMemory2().
-# **This is the easiest and recommended way to use this library:**
   If you want to create a buffer or an image, allocate memory for it, and bind
   them together, all in one call, you can use function vmaCreateBuffer(),
   vmaCreateImage().

When using 3. or 4., the library internally queries Vulkan for memory types
supported for that buffer or image (function `vkGetBufferMemoryRequirements()`)
and uses only one of these types.

If no memory type can be found that meets all the requirements, these functions
return `VK_ERROR_FEATURE_NOT_PRESENT`.

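For example, method 3 could look like the following sketch (assuming `device`,
`allocator`, and a filled `bufferInfo` already exist; error checking omitted for
brevity):

\code
VkBuffer buffer;
vkCreateBuffer(device, &bufferInfo, nullptr, &buffer);

VmaAllocationCreateInfo allocInfo = {};
allocInfo.preferredFlags = VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT;

VmaAllocation allocation;
vmaAllocateMemoryForBuffer(allocator, buffer, &allocInfo, &allocation, nullptr);
vmaBindBufferMemory(allocator, allocation, buffer);
\endcode
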
You can leave the VmaAllocationCreateInfo structure completely filled with zeros.
It means no requirements are specified for the memory type.
It is valid, although not very useful.

\section choosing_memory_type_usage Usage

The easiest way to specify memory requirements is to fill member
VmaAllocationCreateInfo::usage using one of the values of enum #VmaMemoryUsage.
It defines high-level, common usage types.
Since version 3 of the library, it is recommended to use #VMA_MEMORY_USAGE_AUTO to let it select the best memory type for your resource automatically.

For example, if you want to create a uniform buffer that will be filled using
transfer only once or infrequently and then read for rendering every frame, you can
do it using the following code. The buffer will most likely end up in a memory type with
`VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT` to be fast to access by the GPU.

\code
VkBufferCreateInfo bufferInfo = { VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO };
bufferInfo.size = 65536;
bufferInfo.usage = VK_BUFFER_USAGE_UNIFORM_BUFFER_BIT | VK_BUFFER_USAGE_TRANSFER_DST_BIT;

VmaAllocationCreateInfo allocInfo = {};
allocInfo.usage = VMA_MEMORY_USAGE_AUTO;

VkBuffer buffer;
VmaAllocation allocation;
vmaCreateBuffer(allocator, &bufferInfo, &allocInfo, &buffer, &allocation, nullptr);
\endcode

If you have a preference for putting the resource in GPU (device) memory or CPU (host) memory
on systems with a discrete graphics card that keeps these memories separate, you can use
#VMA_MEMORY_USAGE_AUTO_PREFER_DEVICE or #VMA_MEMORY_USAGE_AUTO_PREFER_HOST.

When using `VMA_MEMORY_USAGE_AUTO*` while you want to map the allocated memory,
you also need to specify one of the host access flags:
#VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT or #VMA_ALLOCATION_CREATE_HOST_ACCESS_RANDOM_BIT.
This will help the library decide about the preferred memory type to ensure it has `VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT`
so you can map it.

For example, a staging buffer that will be filled via a mapped pointer and then
used as a source of transfer to the buffer described previously can be created like this.
It will likely end up in a memory type that is `HOST_VISIBLE` and `HOST_COHERENT`
but not `HOST_CACHED` (meaning uncached, write-combined) and not `DEVICE_LOCAL` (meaning system RAM).

\code
VkBufferCreateInfo stagingBufferInfo = { VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO };
stagingBufferInfo.size = 65536;
stagingBufferInfo.usage = VK_BUFFER_USAGE_TRANSFER_SRC_BIT;

VmaAllocationCreateInfo stagingAllocInfo = {};
stagingAllocInfo.usage = VMA_MEMORY_USAGE_AUTO;
stagingAllocInfo.flags = VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT;

VkBuffer stagingBuffer;
VmaAllocation stagingAllocation;
vmaCreateBuffer(allocator, &stagingBufferInfo, &stagingAllocInfo, &stagingBuffer, &stagingAllocation, nullptr);
\endcode

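Similarly, a readback buffer that should prefer host memory on a discrete GPU
could be created as in the following sketch (the buffer parameters are
illustrative only):

\code
VkBufferCreateInfo readbackBufferInfo = { VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO };
readbackBufferInfo.size = 65536;
readbackBufferInfo.usage = VK_BUFFER_USAGE_TRANSFER_DST_BIT;

VmaAllocationCreateInfo readbackAllocInfo = {};
readbackAllocInfo.usage = VMA_MEMORY_USAGE_AUTO_PREFER_HOST; // Prefer system RAM.
readbackAllocInfo.flags = VMA_ALLOCATION_CREATE_HOST_ACCESS_RANDOM_BIT; // Will be read on the CPU.

VkBuffer readbackBuffer;
VmaAllocation readbackAllocation;
vmaCreateBuffer(allocator, &readbackBufferInfo, &readbackAllocInfo, &readbackBuffer, &readbackAllocation, nullptr);
\endcode
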
For more examples of creating different kinds of resources, see chapter \ref usage_patterns.

Usage values `VMA_MEMORY_USAGE_AUTO*` are legal to use only when the library knows
about the resource being created by having `VkBufferCreateInfo` / `VkImageCreateInfo` passed,
so they work with functions like vmaCreateBuffer(), vmaCreateImage(), vmaFindMemoryTypeIndexForBufferInfo() etc.
If you allocate raw memory using function vmaAllocateMemory(), you have to use other means of selecting
the memory type, as described below.

\note
Old usage values (`VMA_MEMORY_USAGE_GPU_ONLY`, `VMA_MEMORY_USAGE_CPU_ONLY`,
`VMA_MEMORY_USAGE_CPU_TO_GPU`, `VMA_MEMORY_USAGE_GPU_TO_CPU`, `VMA_MEMORY_USAGE_CPU_COPY`)
are still available and work the same way as in previous versions of the library
for backward compatibility, but they are not recommended.

\section choosing_memory_type_required_preferred_flags Required and preferred flags

You can specify more detailed requirements by filling members
VmaAllocationCreateInfo::requiredFlags and VmaAllocationCreateInfo::preferredFlags
with a combination of bits from enum `VkMemoryPropertyFlags`. For example,
if you want to create a buffer that will be persistently mapped on the host (so it
must be `HOST_VISIBLE`) and preferably will also be `HOST_COHERENT` and `HOST_CACHED`,
use the following code:

\code
VmaAllocationCreateInfo allocInfo = {};
allocInfo.requiredFlags = VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT;
allocInfo.preferredFlags = VK_MEMORY_PROPERTY_HOST_COHERENT_BIT | VK_MEMORY_PROPERTY_HOST_CACHED_BIT;
allocInfo.flags = VMA_ALLOCATION_CREATE_HOST_ACCESS_RANDOM_BIT | VMA_ALLOCATION_CREATE_MAPPED_BIT;

VkBuffer buffer;
VmaAllocation allocation;
vmaCreateBuffer(allocator, &bufferInfo, &allocInfo, &buffer, &allocation, nullptr);
\endcode

A memory type is chosen that has all the required flags and as many preferred
flags set as possible.

The value passed in VmaAllocationCreateInfo::usage is internally converted to a set of required and preferred flags,
plus some extra "magic" (heuristics).

\section choosing_memory_type_explicit_memory_types Explicit memory types

If you inspected the memory types available on the physical device and you have
a preference for memory types that you want to use, you can fill member
VmaAllocationCreateInfo::memoryTypeBits. It is a bit mask, where each bit set
means that a memory type with that index is allowed to be used for the
allocation. The special value 0, just like `UINT32_MAX`, means there are no
restrictions on the memory type index.

Please note that this member is NOT just a memory type index.
Still, you can use it to choose just one, specific memory type.
For example, if you already determined that your buffer should be created in
memory type 2, use the following code:

\code
uint32_t memoryTypeIndex = 2;

VmaAllocationCreateInfo allocInfo = {};
allocInfo.memoryTypeBits = 1u << memoryTypeIndex;

VkBuffer buffer;
VmaAllocation allocation;
vmaCreateBuffer(allocator, &bufferInfo, &allocInfo, &buffer, &allocation, nullptr);
\endcode


\section choosing_memory_type_custom_memory_pools Custom memory pools

If you allocate from a custom memory pool, none of the ways of specifying memory
requirements described above are applicable and the aforementioned members
of the VmaAllocationCreateInfo structure are ignored. The memory type is selected
explicitly when creating the pool and then used to make all the allocations from
that pool. For further details, see \ref custom_memory_pools.

\section choosing_memory_type_dedicated_allocations Dedicated allocations

Memory for allocations is reserved out of a larger block of `VkDeviceMemory`
allocated from Vulkan internally. That is the main feature of this whole library.
You can still request a separate memory block to be created for an allocation,
just like you would do in a trivial solution without using any allocator.
In that case, a buffer or image is always bound to that memory at offset 0.
This is called a "dedicated allocation".
You can explicitly request it by using flag #VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT.
The library can also internally decide to use a dedicated allocation in some cases, e.g.:

- When the size of the allocation is large.
- When the [VK_KHR_dedicated_allocation](@ref vk_khr_dedicated_allocation) extension is enabled
  and it reports that a dedicated allocation is required or recommended for the resource.
- When allocation of the next big memory block fails due to insufficient device memory,
  but allocation of the exact requested size succeeds.

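Requesting a dedicated block explicitly, e.g. for a large render target, could
look like the following sketch (assuming `imageInfo` describes the image):

\code
VmaAllocationCreateInfo allocInfo = {};
allocInfo.usage = VMA_MEMORY_USAGE_AUTO;
allocInfo.flags = VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT; // Give it its own VkDeviceMemory block.

VkImage image;
VmaAllocation allocation;
vmaCreateImage(allocator, &imageInfo, &allocInfo, &image, &allocation, nullptr);
\endcode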

\page memory_mapping Memory mapping

To "map memory" in Vulkan means to obtain a CPU pointer to `VkDeviceMemory`,
to be able to read from it or write to it in CPU code.
Mapping is possible only for memory allocated from a memory type that has the
`VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT` flag.
Functions `vkMapMemory()`, `vkUnmapMemory()` are designed for this purpose.
You can use them directly with memory allocated by this library,
but it is not recommended because of the following issue:
Mapping the same `VkDeviceMemory` block multiple times is illegal - only one mapping at a time is allowed.
This includes mapping disjoint regions. Mapping is not reference-counted internally by Vulkan.
Because of this, Vulkan Memory Allocator provides the following facilities:

\note If you want to be able to map an allocation, you need to specify one of the flags
#VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT or #VMA_ALLOCATION_CREATE_HOST_ACCESS_RANDOM_BIT
in VmaAllocationCreateInfo::flags. These flags are required for an allocation to be mappable
when using #VMA_MEMORY_USAGE_AUTO or other `VMA_MEMORY_USAGE_AUTO*` enum values.
For other usage values they are ignored and every such allocation made in a `HOST_VISIBLE` memory type is mappable,
but they can still be used for consistency.

\section memory_mapping_mapping_functions Mapping functions

The library provides the following functions for mapping of a specific #VmaAllocation: vmaMapMemory(), vmaUnmapMemory().
They are safer and more convenient to use than standard Vulkan functions.
You can map an allocation multiple times simultaneously - mapping is reference-counted internally.
You can also map different allocations simultaneously regardless of whether they use the same `VkDeviceMemory` block.
The way it is implemented is that the library always maps the entire memory block, not just the region of the allocation.
For further details, see the description of the vmaMapMemory() function.
Example:

\code
// Having these objects initialized:
struct ConstantBuffer
{
    ...
};
ConstantBuffer constantBufferData = ...

VmaAllocator allocator = ...
VkBuffer constantBuffer = ...
VmaAllocation constantBufferAllocation = ...

// You can map and fill your buffer using the following code:

void* mappedData;
vmaMapMemory(allocator, constantBufferAllocation, &mappedData);
memcpy(mappedData, &constantBufferData, sizeof(constantBufferData));
vmaUnmapMemory(allocator, constantBufferAllocation);
\endcode

When mapping, you may see a warning from the Vulkan validation layer similar to this one:

<i>Mapping an image with layout VK_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL can result in undefined behavior if this memory is used by the device. Only GENERAL or PREINITIALIZED should be used.</i>

It happens because the library maps the entire `VkDeviceMemory` block, where different
types of images and buffers may end up together, especially on GPUs with unified memory like Intel.
You can safely ignore it if you are sure you access only memory of the intended
object that you wanted to map.


\section memory_mapping_persistently_mapped_memory Persistently mapped memory

Keeping your memory persistently mapped is generally OK in Vulkan.
You don't need to unmap it before using its data on the GPU.
The library provides a special feature designed for that:
Allocations made with the #VMA_ALLOCATION_CREATE_MAPPED_BIT flag set in
VmaAllocationCreateInfo::flags stay mapped all the time,
so you can just access the CPU pointer to the memory any time
without a need to call any "map" or "unmap" function.
Example:

\code
VkBufferCreateInfo bufCreateInfo = { VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO };
bufCreateInfo.size = sizeof(ConstantBuffer);
bufCreateInfo.usage = VK_BUFFER_USAGE_TRANSFER_SRC_BIT;

VmaAllocationCreateInfo allocCreateInfo = {};
allocCreateInfo.usage = VMA_MEMORY_USAGE_AUTO;
allocCreateInfo.flags = VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT |
    VMA_ALLOCATION_CREATE_MAPPED_BIT;

VkBuffer buf;
VmaAllocation alloc;
VmaAllocationInfo allocInfo;
vmaCreateBuffer(allocator, &bufCreateInfo, &allocCreateInfo, &buf, &alloc, &allocInfo);

// Buffer is already mapped. You can access its memory.
memcpy(allocInfo.pMappedData, &constantBufferData, sizeof(constantBufferData));
\endcode

\note #VMA_ALLOCATION_CREATE_MAPPED_BIT by itself doesn't guarantee that the allocation will end up
in a mappable memory type.
For this, you need to also specify #VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT or
#VMA_ALLOCATION_CREATE_HOST_ACCESS_RANDOM_BIT.
#VMA_ALLOCATION_CREATE_MAPPED_BIT only guarantees that if the memory is `HOST_VISIBLE`, the allocation will be mapped on creation.
For an example of how to make use of this fact, see section \ref usage_patterns_advanced_data_uploading.

\section memory_mapping_cache_control Cache flush and invalidate

Memory in Vulkan doesn't need to be unmapped before using it on the GPU,
but unless a memory type has the `VK_MEMORY_PROPERTY_HOST_COHERENT_BIT` flag set,
you need to manually **invalidate** the cache before reading from a mapped pointer
and **flush** the cache after writing to a mapped pointer.
Map/unmap operations don't do that automatically.
Vulkan provides the following functions for this purpose: `vkFlushMappedMemoryRanges()`,
`vkInvalidateMappedMemoryRanges()`, but this library provides more convenient
functions that refer to a given allocation object: vmaFlushAllocation(),
vmaInvalidateAllocation(),
or multiple objects at once: vmaFlushAllocations(), vmaInvalidateAllocations().

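For example, writing through a mapped pointer and flushing afterwards could look
like the following sketch (assuming `alloc` may live in a non-`HOST_COHERENT`
memory type, and `data`/`dataSize` are your own):

\code
void* mappedData;
vmaMapMemory(allocator, alloc, &mappedData);
memcpy(mappedData, data, dataSize);
// Flush the written range so the device sees it. The library aligns the range
// as required and skips the Vulkan call if the memory is HOST_COHERENT anyway.
vmaFlushAllocation(allocator, alloc, 0, VK_WHOLE_SIZE);
vmaUnmapMemory(allocator, alloc);
\endcode
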
Regions of memory specified for flush/invalidate must be aligned to
`VkPhysicalDeviceLimits::nonCoherentAtomSize`. This is automatically ensured by the library.
In any memory type that is `HOST_VISIBLE` but not `HOST_COHERENT`, all allocations
within blocks are aligned to this value, so their offsets are always a multiple of
`nonCoherentAtomSize` and two different allocations never share the same "line" of this size.

Also, Windows drivers from all 3 PC GPU vendors (AMD, Intel, NVIDIA)
currently provide the `HOST_COHERENT` flag on all memory types that are
`HOST_VISIBLE`, so on PC you may not need to bother.


\page staying_within_budget Staying within budget

When developing a graphics-intensive game or program, it is important to avoid allocating
more GPU memory than is physically available. When the memory is over-committed,
various bad things can happen, depending on the specific GPU, graphics driver, and
operating system:

- It may just work without any problems.
- The application may slow down because some memory blocks are moved to system RAM
  and the GPU has to access them through the PCI Express bus.
- A new allocation may take a very long time to complete, even a few seconds, and possibly
  freeze the entire system.
- The new allocation may fail with `VK_ERROR_OUT_OF_DEVICE_MEMORY`.
- It may even result in a GPU crash (TDR), observed as `VK_ERROR_DEVICE_LOST`
  returned somewhere later.

\section staying_within_budget_querying_for_budget Querying for budget

To query for current memory usage and available budget, use function vmaGetHeapBudgets().
The returned structure #VmaBudget contains quantities expressed in bytes, per Vulkan memory heap.

Please note that this function returns different information and works faster than
vmaCalculateStatistics(). vmaGetHeapBudgets() can be called every frame or even before every
allocation, while vmaCalculateStatistics() is intended to be used rarely,
only to obtain statistical information, e.g. for debugging purposes.

It is recommended to use the <b>VK_EXT_memory_budget</b> device extension to obtain information
about the budget from the Vulkan device. VMA is able to use this extension automatically.
When not enabled, the allocator behaves the same way, but then it estimates current usage
and available budget based on its internal information and Vulkan memory heap sizes,
which may be less precise. In order to use this extension:

1. Make sure extensions VK_EXT_memory_budget and VK_KHR_get_physical_device_properties2
   required by it are available and enable them. Please note that the first is a device
   extension and the second is an instance extension!
2. Use flag #VMA_ALLOCATOR_CREATE_EXT_MEMORY_BUDGET_BIT when creating the #VmaAllocator object.
3. Make sure to call vmaSetCurrentFrameIndex() every frame. The budget is queried from
   Vulkan inside of it to avoid the overhead of querying it with every allocation.

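A per-frame budget check could look like the following sketch (the printing is
illustrative; a real application would react when `usage` approaches `budget`):

\code
const VkPhysicalDeviceMemoryProperties* memProps;
vmaGetMemoryProperties(allocator, &memProps);

VmaBudget budgets[VK_MAX_MEMORY_HEAPS];
vmaGetHeapBudgets(allocator, budgets);

for(uint32_t heapIndex = 0; heapIndex < memProps->memoryHeapCount; ++heapIndex)
{
    printf("Heap %u: %llu / %llu bytes used\n", heapIndex,
        (unsigned long long)budgets[heapIndex].usage,
        (unsigned long long)budgets[heapIndex].budget);
}
\endcode
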
\section staying_within_budget_controlling_memory_usage Controlling memory usage

There are many ways in which you can try to stay within the budget.

First, when making a new allocation requires allocating a new memory block, the library
tries not to exceed the budget automatically. If a block with the default recommended size
(e.g. 256 MB) would go over budget, a smaller block is allocated, possibly even
dedicated memory for just this resource.

If the size of the requested resource plus current memory usage is more than the
budget, by default the library still tries to create it, leaving it to the Vulkan
implementation whether the allocation succeeds or fails. You can change this behavior
by using the #VMA_ALLOCATION_CREATE_WITHIN_BUDGET_BIT flag. With it, the allocation is
not made if it would exceed the budget or if the budget is already exceeded.
VMA then tries to make the allocation from the next eligible Vulkan memory type.
If all of them fail, the call fails with `VK_ERROR_OUT_OF_DEVICE_MEMORY`.
An example usage pattern may be to pass the #VMA_ALLOCATION_CREATE_WITHIN_BUDGET_BIT flag
when creating resources that are not essential for the application (e.g. the texture
of a specific object) and not to pass it when creating critically important resources
(e.g. render targets).

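For instance, a non-essential streamed texture could be created as in this
sketch (assuming `imageInfo` describes it), handling failure gracefully:

\code
VmaAllocationCreateInfo allocInfo = {};
allocInfo.usage = VMA_MEMORY_USAGE_AUTO;
allocInfo.flags = VMA_ALLOCATION_CREATE_WITHIN_BUDGET_BIT;

VkImage image;
VmaAllocation allocation;
VkResult res = vmaCreateImage(allocator, &imageInfo, &allocInfo, &image, &allocation, nullptr);
if(res == VK_ERROR_OUT_OF_DEVICE_MEMORY)
{
    // Over budget - skip this resource or use a lower-resolution fallback.
}
\endcode
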
On AMD graphics cards there is a custom vendor extension available: <b>VK_AMD_memory_overallocation_behavior</b>
that allows controlling the behavior of the Vulkan implementation in out-of-memory cases -
whether it should fail with an error code or still allow the allocation.
Usage of this extension involves only passing an extra structure on Vulkan device creation,
so it is out of scope of this library.

Finally, you can also use the #VMA_ALLOCATION_CREATE_NEVER_ALLOCATE_BIT flag to make sure
a new allocation is created only when it fits inside one of the existing memory blocks.
If it would require allocating a new block, it fails instead with `VK_ERROR_OUT_OF_DEVICE_MEMORY`.
This also ensures that the function call is very fast because it never goes to Vulkan
to obtain a new block.

\note Creating \ref custom_memory_pools with VmaPoolCreateInfo::minBlockCount
set to more than 0 will currently try to allocate memory blocks without checking whether they
fit within budget.


\page resource_aliasing Resource aliasing (overlap)

New explicit graphics APIs (Vulkan and Direct3D 12), thanks to manual memory
management, give an opportunity to alias (overlap) multiple resources in the
same region of memory - a feature not available in the old APIs (Direct3D 11, OpenGL).
It can be useful to save video memory, but it must be used with caution.

For example, if you know the flow of your whole render frame in advance, you
are going to use some intermediate textures or buffers only during a small range of render passes,
and you know these ranges don't overlap in time, you can bind these resources to
the same place in memory, even if they have completely different parameters (width, height, format etc.).

![Resource aliasing (overlap)](../gfx/Aliasing.png)

Such a scenario is possible using VMA, but you need to create your images manually.
Then you need to calculate the parameters of the allocation to be made using this formula:

- allocation size = max(size of each image)
- allocation alignment = max(alignment of each image)
- allocation memoryTypeBits = bitwise AND(memoryTypeBits of each image)

The following example shows two different images bound to the same place in memory,
allocated to fit the largest of them.

\code
// A 512x512 texture to be sampled.
VkImageCreateInfo img1CreateInfo = { VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO };
img1CreateInfo.imageType = VK_IMAGE_TYPE_2D;
img1CreateInfo.extent.width = 512;
img1CreateInfo.extent.height = 512;
img1CreateInfo.extent.depth = 1;
img1CreateInfo.mipLevels = 10;
img1CreateInfo.arrayLayers = 1;
img1CreateInfo.format = VK_FORMAT_R8G8B8A8_SRGB;
img1CreateInfo.tiling = VK_IMAGE_TILING_OPTIMAL;
img1CreateInfo.initialLayout = VK_IMAGE_LAYOUT_UNDEFINED;
img1CreateInfo.usage = VK_IMAGE_USAGE_TRANSFER_DST_BIT | VK_IMAGE_USAGE_SAMPLED_BIT;
img1CreateInfo.samples = VK_SAMPLE_COUNT_1_BIT;

// A full screen texture to be used as color attachment.
VkImageCreateInfo img2CreateInfo = { VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO };
img2CreateInfo.imageType = VK_IMAGE_TYPE_2D;
img2CreateInfo.extent.width = 1920;
img2CreateInfo.extent.height = 1080;
img2CreateInfo.extent.depth = 1;
img2CreateInfo.mipLevels = 1;
img2CreateInfo.arrayLayers = 1;
img2CreateInfo.format = VK_FORMAT_R8G8B8A8_UNORM;
img2CreateInfo.tiling = VK_IMAGE_TILING_OPTIMAL;
img2CreateInfo.initialLayout = VK_IMAGE_LAYOUT_UNDEFINED;
img2CreateInfo.usage = VK_IMAGE_USAGE_SAMPLED_BIT | VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT;
img2CreateInfo.samples = VK_SAMPLE_COUNT_1_BIT;

VkImage img1;
res = vkCreateImage(device, &img1CreateInfo, nullptr, &img1);
VkImage img2;
res = vkCreateImage(device, &img2CreateInfo, nullptr, &img2);

VkMemoryRequirements img1MemReq;
vkGetImageMemoryRequirements(device, img1, &img1MemReq);
VkMemoryRequirements img2MemReq;
vkGetImageMemoryRequirements(device, img2, &img2MemReq);

VkMemoryRequirements finalMemReq = {};
finalMemReq.size = std::max(img1MemReq.size, img2MemReq.size);
finalMemReq.alignment = std::max(img1MemReq.alignment, img2MemReq.alignment);
finalMemReq.memoryTypeBits = img1MemReq.memoryTypeBits & img2MemReq.memoryTypeBits;
// Validate if(finalMemReq.memoryTypeBits != 0)

VmaAllocationCreateInfo allocCreateInfo = {};
allocCreateInfo.preferredFlags = VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT;

VmaAllocation alloc;
res = vmaAllocateMemory(allocator, &finalMemReq, &allocCreateInfo, &alloc, nullptr);

res = vmaBindImageMemory(allocator, alloc, img1);
res = vmaBindImageMemory(allocator, alloc, img2);

// You can use img1, img2 here, but not at the same time!

vmaFreeMemory(allocator, alloc);
vkDestroyImage(device, img2, nullptr);
vkDestroyImage(device, img1, nullptr);
\endcode

Remember that using resources that alias in memory requires proper synchronization.
You need to issue a memory barrier to make sure commands that use `img1` and `img2`
don't overlap on the GPU timeline.
You also need to treat a resource after aliasing as uninitialized - containing garbage data.
For example, if you use `img1` and then want to use `img2`, you need to issue
an image memory barrier for `img2` with `oldLayout` = `VK_IMAGE_LAYOUT_UNDEFINED`.

Additional considerations:

- Vulkan also allows interpreting the contents of memory between aliasing resources consistently in some cases.
See chapter 11.8 "Memory Aliasing" of the Vulkan specification or the `VK_IMAGE_CREATE_ALIAS_BIT` flag.
- You can create a more complex layout where different images and buffers are bound
at different offsets inside one large allocation. For example, one can imagine
a big texture used in some render passes, aliasing with a set of many small buffers
used in some further passes. To bind a resource at a non-zero offset in an allocation,
use vmaBindBufferMemory2() / vmaBindImageMemory2(), as in the sketch below this list.
- Before allocating memory for the resources you want to alias, check `memoryTypeBits`
returned in the memory requirements of each resource to make sure the bits overlap.
Some GPUs may expose multiple memory types suitable e.g. only for buffers or
images with `COLOR_ATTACHMENT` usage, so the sets of memory types supported by your
resources may be disjoint. Aliasing them is not possible in that case.

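A minimal sketch of binding at a non-zero offset (the `offset` value here is
hypothetical; it must satisfy the resource's `VkMemoryRequirements::alignment`
and fit within the allocation):

\code
VkDeviceSize offset = 0x10000; // Illustrative only.
res = vmaBindImageMemory2(allocator, alloc, offset, img2, nullptr);
\endcode
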
18173 
18174 \page custom_memory_pools Custom memory pools
18175 
18176 A memory pool contains a number of `VkDeviceMemory` blocks.
18177 The library automatically creates and manages default pool for each memory type available on the device.
18178 Default memory pool automatically grows in size.
18179 Size of allocated blocks is also variable and managed automatically.
18180 
18181 You can create custom pool and allocate memory out of it.
18182 It can be useful if you want to:
18183 
18184 - Keep certain kind of allocations separate from others.
18185 - Enforce particular, fixed size of Vulkan memory blocks.
18186 - Limit maximum amount of Vulkan memory allocated for that pool.
18187 - Reserve minimum or fixed amount of Vulkan memory always preallocated for that pool.
18188 - Use extra parameters for a set of your allocations that are available in #VmaPoolCreateInfo but not in
18189   #VmaAllocationCreateInfo - e.g., custom minimum alignment, custom `pNext` chain.
18190 - Perform defragmentation on a specific subset of your allocations.

To use custom memory pools:

-# Fill VmaPoolCreateInfo structure.
-# Call vmaCreatePool() to obtain #VmaPool handle.
-# When making an allocation, set VmaAllocationCreateInfo::pool to this handle.
   You don't need to specify any other parameters of this structure, like `usage`.

Example:

\code
// Find memoryTypeIndex for the pool.
VkBufferCreateInfo sampleBufCreateInfo = { VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO };
sampleBufCreateInfo.size = 0x10000; // Doesn't matter.
sampleBufCreateInfo.usage = VK_BUFFER_USAGE_UNIFORM_BUFFER_BIT | VK_BUFFER_USAGE_TRANSFER_DST_BIT;

VmaAllocationCreateInfo sampleAllocCreateInfo = {};
sampleAllocCreateInfo.usage = VMA_MEMORY_USAGE_AUTO;

uint32_t memTypeIndex;
VkResult res = vmaFindMemoryTypeIndexForBufferInfo(allocator,
    &sampleBufCreateInfo, &sampleAllocCreateInfo, &memTypeIndex);
// Check res...

// Create a pool that can have at most 2 blocks, 128 MiB each.
VmaPoolCreateInfo poolCreateInfo = {};
poolCreateInfo.memoryTypeIndex = memTypeIndex;
poolCreateInfo.blockSize = 128ull * 1024 * 1024;
poolCreateInfo.maxBlockCount = 2;

VmaPool pool;
res = vmaCreatePool(allocator, &poolCreateInfo, &pool);
// Check res...

// Allocate a buffer out of it.
VkBufferCreateInfo bufCreateInfo = { VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO };
bufCreateInfo.size = 1024;
bufCreateInfo.usage = VK_BUFFER_USAGE_UNIFORM_BUFFER_BIT | VK_BUFFER_USAGE_TRANSFER_DST_BIT;

VmaAllocationCreateInfo allocCreateInfo = {};
allocCreateInfo.pool = pool;

VkBuffer buf;
VmaAllocation alloc;
res = vmaCreateBuffer(allocator, &bufCreateInfo, &allocCreateInfo, &buf, &alloc, nullptr);
// Check res...
\endcode

You have to free all allocations made from this pool before destroying it.

\code
vmaDestroyBuffer(allocator, buf, alloc);
vmaDestroyPool(allocator, pool);
\endcode

New versions of this library support creating dedicated allocations in custom pools.
This is supported only when VmaPoolCreateInfo::blockSize = 0.
To use this feature, set VmaAllocationCreateInfo::pool to your custom pool and
VmaAllocationCreateInfo::flags to #VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT.
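
A minimal sketch (assuming `pool` was created with `blockSize = 0` as described above):

\code
VmaAllocationCreateInfo allocCreateInfo = {};
allocCreateInfo.pool = pool;
allocCreateInfo.flags = VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT;

VkBuffer buf;
VmaAllocation alloc;
res = vmaCreateBuffer(allocator, &bufCreateInfo, &allocCreateInfo, &buf, &alloc, nullptr);
// The allocation receives its own dedicated `VkDeviceMemory` block.
\endcode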

\note Excessive use of custom pools is a common mistake when using this library.
Custom pools may be useful for special purposes - when you want to
keep certain types of resources separate, e.g. to reserve a minimum amount of memory
for them or to limit the maximum amount of memory they can occupy. For most
resources this is not needed, so it is not recommended to create #VmaPool
objects and allocations out of them. Allocating from the default pool is sufficient.


\section custom_memory_pools_MemTypeIndex Choosing memory type index

When creating a pool, you must explicitly specify a memory type index.
To find the one suitable for your buffers or images, you can use helper functions
vmaFindMemoryTypeIndexForBufferInfo(), vmaFindMemoryTypeIndexForImageInfo().
You need to provide structures with example parameters of buffers or images
that you are going to create in that pool.

\code
VkBufferCreateInfo exampleBufCreateInfo = { VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO };
exampleBufCreateInfo.size = 1024; // Doesn't matter
exampleBufCreateInfo.usage = VK_BUFFER_USAGE_UNIFORM_BUFFER_BIT | VK_BUFFER_USAGE_TRANSFER_DST_BIT;

VmaAllocationCreateInfo allocCreateInfo = {};
allocCreateInfo.usage = VMA_MEMORY_USAGE_AUTO;

uint32_t memTypeIndex;
vmaFindMemoryTypeIndexForBufferInfo(allocator, &exampleBufCreateInfo, &allocCreateInfo, &memTypeIndex);

VmaPoolCreateInfo poolCreateInfo = {};
poolCreateInfo.memoryTypeIndex = memTypeIndex;
// ...
\endcode

When creating buffers/images allocated in that pool, provide the following parameters:

- `VkBufferCreateInfo`: Prefer to pass the same parameters as above.
  Otherwise you risk creating resources in a memory type that is not suitable for them, which may result in undefined behavior.
  Using different `VK_BUFFER_USAGE_` flags may work, but you shouldn't create images in a pool intended for buffers
  or the other way around.
- VmaAllocationCreateInfo: You don't need to pass the same parameters. Fill only the `pool` member.
  Other members are ignored anyway.

\section linear_algorithm Linear allocation algorithm

Each Vulkan memory block managed by this library has accompanying metadata that
keeps track of used and unused regions. By default, the metadata structure and
algorithm try to find the best place for new allocations among free regions to
optimize memory usage. This way you can allocate and free objects in any order.

![Default allocation algorithm](../gfx/Linear_allocator_1_algo_default.png)

Sometimes there is a need for a simpler, linear allocation algorithm. You can
create a custom pool that uses such an algorithm by adding flag
#VMA_POOL_CREATE_LINEAR_ALGORITHM_BIT to VmaPoolCreateInfo::flags while creating the
#VmaPool object. Then an alternative metadata management is used. It always
creates new allocations after the last one and doesn't reuse free regions left
after allocations freed in the middle. This results in better allocation performance and
less memory consumed by metadata.

![Linear allocation algorithm](../gfx/Linear_allocator_2_algo_linear.png)

With this one flag, you can create a custom pool that can be used in many ways:
free-at-once, stack, double stack, and ring buffer. See below for details.
You don't need to specify explicitly which of these options you are going to use - it is detected automatically.
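
Creating such a pool could look like this (a minimal sketch; `memTypeIndex` is assumed to be found
as shown in the previous section, and the block size is just an example value):

\code
VmaPoolCreateInfo poolCreateInfo = {};
poolCreateInfo.memoryTypeIndex = memTypeIndex;
poolCreateInfo.flags = VMA_POOL_CREATE_LINEAR_ALGORITHM_BIT;
poolCreateInfo.blockSize = 16ull * 1024 * 1024;
poolCreateInfo.maxBlockCount = 1; // Required later for double stack and ring buffer usage.

VmaPool pool;
VkResult res = vmaCreatePool(allocator, &poolCreateInfo, &pool);
// Check res...
\endcode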

\subsection linear_algorithm_free_at_once Free-at-once

In a pool that uses the linear algorithm, you still need to free all the allocations
individually, e.g. by using vmaFreeMemory() or vmaDestroyBuffer(). You can free
them in any order. New allocations are always made after the last one - free space
in the middle is not reused. However, when you release all the allocations and
the pool becomes empty, allocation starts from the beginning again. This way you
can use the linear algorithm to speed up creation of allocations that you are going
to release all at once.

![Free-at-once](../gfx/Linear_allocator_3_free_at_once.png)

This mode is also available for pools created with a VmaPoolCreateInfo::maxBlockCount
value that allows multiple memory blocks.

\subsection linear_algorithm_stack Stack

When you free an allocation that was created last, its space can be reused.
Thanks to this, if you always release allocations in the order opposite to their
creation (LIFO - Last In, First Out), you can achieve the behavior of a stack.

![Stack](../gfx/Linear_allocator_4_stack.png)

This mode is also available for pools created with a VmaPoolCreateInfo::maxBlockCount
value that allows multiple memory blocks.

\subsection linear_algorithm_double_stack Double stack

The space reserved by a custom pool with the linear algorithm may be used by two
stacks:

- The first, default one, growing up from offset 0.
- The second, "upper" one, growing down from the end towards lower offsets.

To make an allocation from the upper stack, add flag #VMA_ALLOCATION_CREATE_UPPER_ADDRESS_BIT
to VmaAllocationCreateInfo::flags.
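
For example (a minimal sketch, assuming `pool` uses the linear algorithm and `bufCreateInfo`
is filled as usual):

\code
VmaAllocationCreateInfo allocCreateInfo = {};
allocCreateInfo.pool = pool;
allocCreateInfo.flags = VMA_ALLOCATION_CREATE_UPPER_ADDRESS_BIT;

VkBuffer buf;
VmaAllocation alloc;
res = vmaCreateBuffer(allocator, &bufCreateInfo, &allocCreateInfo, &buf, &alloc, nullptr);
\endcode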

![Double stack](../gfx/Linear_allocator_7_double_stack.png)

Double stack is available only in pools with one memory block -
VmaPoolCreateInfo::maxBlockCount must be 1. Otherwise, behavior is undefined.

When the two stacks' ends meet and there is not enough space between them for a
new allocation, such an allocation fails with the usual
`VK_ERROR_OUT_OF_DEVICE_MEMORY` error.

\subsection linear_algorithm_ring_buffer Ring buffer

When you free some allocations from the beginning and there is not enough free space
for a new one at the end of the pool, the allocator's "cursor" wraps around to the
beginning and starts allocating there. Thanks to this, if you always release
allocations in the same order as you created them (FIFO - First In, First Out),
you can achieve the behavior of a ring buffer / queue.

![Ring buffer](../gfx/Linear_allocator_5_ring_buffer.png)

Ring buffer is available only in pools with one memory block -
VmaPoolCreateInfo::maxBlockCount must be 1. Otherwise, behavior is undefined.

\note \ref defragmentation is not supported in custom pools created with #VMA_POOL_CREATE_LINEAR_ALGORITHM_BIT.


\page defragmentation Defragmentation

Interleaved allocations and deallocations of many objects of varying size can
cause fragmentation over time, which can lead to a situation where the library is unable
to find a continuous range of free memory for a new allocation, even though there is
enough free space in total, just scattered across many small free ranges between existing
allocations.

To mitigate this problem, you can use the defragmentation feature.
It doesn't happen automatically, though, and needs your cooperation,
because VMA is a low-level library that only allocates memory.
It cannot recreate buffers and images in a new place, as it doesn't remember the contents of `VkBufferCreateInfo` / `VkImageCreateInfo` structures.
It cannot copy their contents, as it doesn't record any commands to a command buffer.

Example:

\code
VmaDefragmentationInfo defragInfo = {};
defragInfo.pool = myPool;
defragInfo.flags = VMA_DEFRAGMENTATION_FLAG_ALGORITHM_FAST_BIT;

VmaDefragmentationContext defragCtx;
VkResult res = vmaBeginDefragmentation(allocator, &defragInfo, &defragCtx);
// Check res...

for(;;)
{
    VmaDefragmentationPassMoveInfo pass;
    res = vmaBeginDefragmentationPass(allocator, defragCtx, &pass);
    if(res == VK_SUCCESS)
        break;
    else if(res != VK_INCOMPLETE)
    {
        // Handle error...
    }

    for(uint32_t i = 0; i < pass.moveCount; ++i)
    {
        // Inspect pass.pMoves[i].srcAllocation, identify what buffer/image it represents.
        VmaAllocationInfo allocInfo;
        vmaGetAllocationInfo(allocator, pass.pMoves[i].srcAllocation, &allocInfo);
        MyEngineResourceData* resData = (MyEngineResourceData*)allocInfo.pUserData;

        // Recreate and bind this buffer/image at: pass.pMoves[i].dstMemory, pass.pMoves[i].dstOffset.
        VkImageCreateInfo imgCreateInfo = ...
        VkImage newImg;
        res = vkCreateImage(device, &imgCreateInfo, nullptr, &newImg);
        // Check res...
        res = vmaBindImageMemory(allocator, pass.pMoves[i].dstTmpAllocation, newImg);
        // Check res...

        // Issue a vkCmdCopyBuffer/vkCmdCopyImage to copy its content to the new place.
        vkCmdCopyImage(cmdBuf, resData->img, ..., newImg, ...);
    }

    // Make sure the copy commands finished executing.
    vkWaitForFences(...);

    // Destroy old buffers/images bound with pass.pMoves[i].srcAllocation.
    for(uint32_t i = 0; i < pass.moveCount; ++i)
    {
        // ...
        vkDestroyImage(device, resData->img, nullptr);
    }

    // Update appropriate descriptors to point to the new places...

    res = vmaEndDefragmentationPass(allocator, defragCtx, &pass);
    if(res == VK_SUCCESS)
        break;
    else if(res != VK_INCOMPLETE)
    {
        // Handle error...
    }
}

vmaEndDefragmentation(allocator, defragCtx, nullptr);
\endcode

Although functions like vmaCreateBuffer(), vmaCreateImage(), vmaDestroyBuffer(), vmaDestroyImage()
create/destroy an allocation and a buffer/image at once, these are just a shortcut for
creating the resource, allocating memory, and binding them together.
Defragmentation works on memory allocations only. You must handle the rest manually.
Defragmentation is an iterative process that should repeat "passes" as long as related functions
return `VK_INCOMPLETE`, not `VK_SUCCESS`.
In each pass:

1. vmaBeginDefragmentationPass() function call:
   - Calculates and returns the list of allocations to be moved in this pass.
     Note this can be a time-consuming process.
   - Reserves destination memory for them by creating temporary destination allocations
     that you can query for their `VkDeviceMemory` + offset using vmaGetAllocationInfo().
2. Inside the pass, **you should**:
   - Inspect the returned list of allocations to be moved.
   - Create new buffers/images and bind them at the returned destination temporary allocations.
   - Copy data from source to destination resources if necessary.
   - Destroy the source buffers/images, but NOT their allocations.
3. vmaEndDefragmentationPass() function call:
   - Frees the source memory reserved for the allocations that are moved.
   - Modifies source #VmaAllocation objects that are moved to point to the destination reserved memory.
   - Frees `VkDeviceMemory` blocks that became empty.

Unlike in previous iterations of the defragmentation API, there is no list of "movable" allocations passed as a parameter.
The defragmentation algorithm tries to move all suitable allocations.
You can, however, refuse to move some of them inside a defragmentation pass by setting
`pass.pMoves[i].operation` to #VMA_DEFRAGMENTATION_MOVE_OPERATION_IGNORE.
This is not recommended and may result in suboptimal packing of the allocations after defragmentation.
If you cannot ensure that all your allocations are movable, it is better to keep the movable ones separate in a custom pool.

Inside a pass, for each allocation that should be moved:

- You should copy its data from the source to the destination place by calling e.g. `vkCmdCopyBuffer()`, `vkCmdCopyImage()`.
  - You need to make sure these commands finished executing before destroying the source buffers/images and before calling vmaEndDefragmentationPass().
- If a resource doesn't contain any meaningful data, e.g. it is a transient color attachment image to be cleared,
  filled, and used temporarily in each rendering frame, you can just recreate this image
  without copying its data.
- If the resource is in `HOST_VISIBLE` and `HOST_CACHED` memory, you can copy its data on the CPU
  using `memcpy()`.
- If you cannot move the allocation, you can set `pass.pMoves[i].operation` to #VMA_DEFRAGMENTATION_MOVE_OPERATION_IGNORE.
  This will cancel the move.
  - vmaEndDefragmentationPass() will then free the destination memory,
    not the source memory of the allocation, leaving it unchanged.
- If you decide the allocation is unimportant and can be destroyed instead of moved (e.g. it wasn't used for a long time),
  you can set `pass.pMoves[i].operation` to #VMA_DEFRAGMENTATION_MOVE_OPERATION_DESTROY.
  - vmaEndDefragmentationPass() will then free both source and destination memory, and will destroy the source #VmaAllocation object.

You can defragment a specific custom pool by setting VmaDefragmentationInfo::pool
(like in the example above) or all the default pools by setting this member to null.

Defragmentation is always performed in each pool separately.
Allocations are never moved between different Vulkan memory types.
The size of the destination memory reserved for a moved allocation is the same as the original one.
Alignment of an allocation, as it was determined using `vkGetBufferMemoryRequirements()` etc., is also respected after defragmentation.
Buffers/images should be recreated with the same `VkBufferCreateInfo` / `VkImageCreateInfo` parameters as the original ones.

You can perform the defragmentation incrementally to limit the number of allocations and bytes to be moved
in each pass, e.g. to call it in sync with render frames and avoid large hitches.
See members: VmaDefragmentationInfo::maxBytesPerPass, VmaDefragmentationInfo::maxAllocationsPerPass.
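
For example (a sketch; the limits shown are arbitrary):

\code
VmaDefragmentationInfo defragInfo = {};
defragInfo.flags = VMA_DEFRAGMENTATION_FLAG_ALGORITHM_FAST_BIT;
defragInfo.maxBytesPerPass = 16ull * 1024 * 1024; // Move at most 16 MiB per pass.
defragInfo.maxAllocationsPerPass = 64; // Move at most 64 allocations per pass.
\endcode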

It is also safe to perform the defragmentation asynchronously to render frames and other Vulkan and VMA
usage, possibly from multiple threads, with the exception that allocations
returned in VmaDefragmentationPassMoveInfo::pMoves shouldn't be destroyed until the defragmentation pass is ended.

<b>Mapping</b> is preserved on allocations that are moved during defragmentation.
Whether mapped through #VMA_ALLOCATION_CREATE_MAPPED_BIT or vmaMapMemory(), the allocations
remain mapped at their new place. Of course, the pointer to the mapped data changes, so it needs to be queried again
using VmaAllocationInfo::pMappedData.

\note Defragmentation is not supported in custom pools created with #VMA_POOL_CREATE_LINEAR_ALGORITHM_BIT.


\page statistics Statistics

This library contains several functions that return information about its internal state,
especially the amount of memory allocated from Vulkan.

\section statistics_numeric_statistics Numeric statistics

If you need to obtain basic statistics about memory usage per heap, together with current budget,
you can call function vmaGetHeapBudgets() and inspect structure #VmaBudget.
This is useful to keep track of memory usage and stay within budget
(see also \ref staying_within_budget).
Example:

\code
uint32_t heapIndex = ...

VmaBudget budgets[VK_MAX_MEMORY_HEAPS];
vmaGetHeapBudgets(allocator, budgets);

printf("My heap currently has %u allocations taking %llu B,\n",
    budgets[heapIndex].statistics.allocationCount,
    budgets[heapIndex].statistics.allocationBytes);
printf("allocated out of %u Vulkan device memory blocks taking %llu B,\n",
    budgets[heapIndex].statistics.blockCount,
    budgets[heapIndex].statistics.blockBytes);
printf("Vulkan reports total usage %llu B with budget %llu B.\n",
    budgets[heapIndex].usage,
    budgets[heapIndex].budget);
\endcode

You can query for more detailed statistics per memory heap, type, and totals,
including minimum and maximum allocation size and unused range size,
by calling function vmaCalculateStatistics() and inspecting structure #VmaTotalStatistics.
This function is slower, though, as it has to traverse all the internal data structures,
so it should be used only for debugging purposes.
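
For example (a minimal sketch):

\code
VmaTotalStatistics stats;
vmaCalculateStatistics(allocator, &stats);
printf("Total: %u allocations taking %llu B, largest allocation %llu B.\n",
    stats.total.statistics.allocationCount,
    stats.total.statistics.allocationBytes,
    stats.total.allocationSizeMax);
\endcode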

You can query for statistics of a custom pool using function vmaGetPoolStatistics()
or vmaCalculatePoolStatistics().

You can query for information about a specific allocation using function vmaGetAllocationInfo().
It fills in structure #VmaAllocationInfo.

\section statistics_json_dump JSON dump

You can dump the internal state of the allocator to a string in JSON format using function vmaBuildStatsString().
The result is guaranteed to be correct JSON.
It uses ANSI encoding.
Any strings provided by the user (see [Allocation names](@ref allocation_names))
are copied as-is and properly escaped for JSON, so if they use UTF-8, ISO-8859-2, or any other encoding,
this JSON string can be treated as using this encoding.
It must be freed using function vmaFreeStatsString().
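
For example (a minimal sketch):

\code
char* statsString = nullptr;
vmaBuildStatsString(allocator, &statsString, VK_TRUE); // VK_TRUE = include detailed map of blocks.
// Write statsString to a file or log...
vmaFreeStatsString(allocator, statsString);
\endcode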

The format of this JSON string is not part of the official documentation of the library,
but it will not change in a backward-incompatible way without an increase of the library's major version number
and an appropriate mention in the changelog.

The JSON string contains all the data that can be obtained using vmaCalculateStatistics().
It can also contain a detailed map of allocated memory blocks and their regions -
free and occupied by allocations.
This allows e.g. visualizing the memory or assessing fragmentation.


\page allocation_annotation Allocation names and user data

\section allocation_user_data Allocation user data

You can annotate allocations with your own information, e.g. for debugging purposes.
To do that, fill the VmaAllocationCreateInfo::pUserData field when creating
an allocation. It is an opaque `void*` pointer. You can use it e.g. as a pointer,
some handle, index, key, ordinal number, or any other value that would associate
the allocation with your custom metadata.
It is useful for identifying the appropriate data structures in your engine given a #VmaAllocation,
e.g. when doing \ref defragmentation.

\code
VkBufferCreateInfo bufCreateInfo = ...

MyBufferMetadata* pMetadata = CreateBufferMetadata();

VmaAllocationCreateInfo allocCreateInfo = {};
allocCreateInfo.usage = VMA_MEMORY_USAGE_AUTO;
allocCreateInfo.pUserData = pMetadata;

VkBuffer buffer;
VmaAllocation allocation;
vmaCreateBuffer(allocator, &bufCreateInfo, &allocCreateInfo, &buffer, &allocation, nullptr);
\endcode

The pointer may be later retrieved as VmaAllocationInfo::pUserData:

\code
VmaAllocationInfo allocInfo;
vmaGetAllocationInfo(allocator, allocation, &allocInfo);
MyBufferMetadata* pMetadata = (MyBufferMetadata*)allocInfo.pUserData;
\endcode

It can also be changed using function vmaSetAllocationUserData().
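
For example (where `pNewMetadata` is a hypothetical pointer of yours):

\code
vmaSetAllocationUserData(allocator, allocation, pNewMetadata);
\endcode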

Values of (non-zero) allocations' `pUserData` are printed in the JSON report created by
vmaBuildStatsString() in hexadecimal form.

\section allocation_names Allocation names

An allocation can also carry a null-terminated string, giving a name to the allocation.
To set it, call vmaSetAllocationName().
The library creates an internal copy of the string, so the pointer you pass doesn't need
to be valid for the whole lifetime of the allocation. You can free it after the call.

\code
std::string imageName = "Texture: ";
imageName += fileName;
vmaSetAllocationName(allocator, allocation, imageName.c_str());
\endcode

The string can be later retrieved by inspecting VmaAllocationInfo::pName.
It is also printed in the JSON report created by vmaBuildStatsString().

\note Setting a string name on a VMA allocation doesn't automatically set it on the Vulkan buffer or image created with it.
You must do that manually using an extension like VK_EXT_debug_utils, which is independent of this library.


\page virtual_allocator Virtual allocator

As an extra feature, the core allocation algorithm of the library is exposed through a simple and convenient API of a "virtual allocator".
It doesn't allocate any real GPU memory. It just keeps track of used and free regions of a "virtual block".
You can use it to allocate your own memory or other objects, even completely unrelated to Vulkan.
A common use case is sub-allocation of pieces of one large GPU buffer.

\section virtual_allocator_creating_virtual_block Creating virtual block

To use this functionality, there is no main "allocator" object.
You don't need to have a #VmaAllocator object created.
All you need to do is create a separate #VmaVirtualBlock object for each block of memory you want to be managed by the allocator:

-# Fill in #VmaVirtualBlockCreateInfo structure.
-# Call vmaCreateVirtualBlock(). Get new #VmaVirtualBlock object.

Example:

\code
VmaVirtualBlockCreateInfo blockCreateInfo = {};
blockCreateInfo.size = 1048576; // 1 MB

VmaVirtualBlock block;
VkResult res = vmaCreateVirtualBlock(&blockCreateInfo, &block);
\endcode

\section virtual_allocator_making_virtual_allocations Making virtual allocations

#VmaVirtualBlock object contains an internal data structure that keeps track of free and occupied regions
using the same code as the main Vulkan memory allocator.
Similarly to #VmaAllocation for standard GPU allocations, there is a #VmaVirtualAllocation type
that represents an opaque handle to an allocation within the virtual block.

In order to make such an allocation:

-# Fill in #VmaVirtualAllocationCreateInfo structure.
-# Call vmaVirtualAllocate(). Get new #VmaVirtualAllocation object that represents the allocation.
   You can also receive `VkDeviceSize offset` that was assigned to the allocation.

Example:

\code
VmaVirtualAllocationCreateInfo allocCreateInfo = {};
allocCreateInfo.size = 4096; // 4 KB

VmaVirtualAllocation alloc;
VkDeviceSize offset;
res = vmaVirtualAllocate(block, &allocCreateInfo, &alloc, &offset);
if(res == VK_SUCCESS)
{
    // Use the 4 KB of your memory starting at offset.
}
else
{
    // Allocation failed - no space for it could be found. Handle this error!
}
\endcode

\section virtual_allocator_deallocation Deallocation

When no longer needed, an allocation can be freed by calling vmaVirtualFree().
You can only pass to this function an allocation that was previously returned by vmaVirtualAllocate()
called for the same #VmaVirtualBlock.

When the whole block is no longer needed, the block object can be released by calling vmaDestroyVirtualBlock().
All allocations must be freed before the block is destroyed, which is checked internally by an assert.
However, if you don't want to call vmaVirtualFree() for each allocation, you can use vmaClearVirtualBlock() to free them all at once -
a feature not available in the normal Vulkan memory allocator. Example:

\code
vmaVirtualFree(block, alloc);
vmaDestroyVirtualBlock(block);
\endcode

\section virtual_allocator_allocation_parameters Allocation parameters

You can attach a custom pointer to each allocation by using vmaSetVirtualAllocationUserData().
Its default value is null.
It can be used to store any data that needs to be associated with that allocation - e.g. an index, a handle, or a pointer to some
larger data structure containing more information. Example:

\code
struct CustomAllocData
{
    std::string m_AllocName;
};
CustomAllocData* allocData = new CustomAllocData();
allocData->m_AllocName = "My allocation 1";
vmaSetVirtualAllocationUserData(block, alloc, allocData);
\endcode

The pointer can later be fetched, along with allocation offset and size, by passing the allocation handle to function
vmaGetVirtualAllocationInfo() and inspecting the returned structure #VmaVirtualAllocationInfo.
If you allocated a new object to be used as the custom pointer, don't forget to delete that object before freeing the allocation!
Example:

\code
VmaVirtualAllocationInfo allocInfo;
vmaGetVirtualAllocationInfo(block, alloc, &allocInfo);
delete (CustomAllocData*)allocInfo.pUserData;

vmaVirtualFree(block, alloc);
\endcode

\section virtual_allocator_alignment_and_units Alignment and units

It feels natural to express sizes and offsets in bytes.
If an offset of an allocation needs to be aligned to a multiple of some number (e.g. 4 bytes), you can fill the optional member
VmaVirtualAllocationCreateInfo::alignment to request it. Example:

\code
VmaVirtualAllocationCreateInfo allocCreateInfo = {};
allocCreateInfo.size = 4096; // 4 KB
allocCreateInfo.alignment = 4; // Returned offset must be a multiple of 4 B

VmaVirtualAllocation alloc;
res = vmaVirtualAllocate(block, &allocCreateInfo, &alloc, nullptr);
\endcode

Alignments of different allocations made from one block may vary.
However, if all alignments and sizes are always a multiple of some size, e.g. 4 B or `sizeof(MyDataStruct)`,
you can express all sizes, alignments, and offsets in multiples of that size instead of individual bytes.
It might be more convenient, but you need to make sure you use this new unit consistently in all the places:

- VmaVirtualBlockCreateInfo::size
- VmaVirtualAllocationCreateInfo::size and VmaVirtualAllocationCreateInfo::alignment
- Using offset returned by vmaVirtualAllocate() or in VmaVirtualAllocationInfo::offset

\section virtual_allocator_statistics Statistics

You can obtain statistics of a virtual block using vmaGetVirtualBlockStatistics()
(to get brief statistics that are fast to calculate)
or vmaCalculateVirtualBlockStatistics() (to get more detailed statistics, slower to calculate).
The functions fill structures #VmaStatistics and #VmaDetailedStatistics respectively - the same ones used by the normal Vulkan memory allocator.
Example:

\code
VmaStatistics stats;
vmaGetVirtualBlockStatistics(block, &stats);
printf("My virtual block has %llu bytes used by %u virtual allocations\n",
    stats.allocationBytes, stats.allocationCount);
\endcode

You can also request a full list of allocations and free regions as a string in JSON format by calling
vmaBuildVirtualBlockStatsString().
The returned string must later be freed using vmaFreeVirtualBlockStatsString().
The format of this string differs from the one returned by the main Vulkan allocator, but it is similar.

\section virtual_allocator_additional_considerations Additional considerations

The "virtual allocator" functionality is implemented on the level of individual memory blocks.
Keeping track of a whole collection of blocks, allocating new ones when out of free space,
deleting empty ones, and deciding which one to try first for a new allocation must be implemented by the user.

Alternative allocation algorithms are supported, just like in custom pools of real GPU memory.
See enum #VmaVirtualBlockCreateFlagBits to learn how to specify them (e.g. #VMA_VIRTUAL_BLOCK_CREATE_LINEAR_ALGORITHM_BIT).
You can find their description in chapter \ref custom_memory_pools.
Allocation strategies are also supported.
See enum #VmaVirtualAllocationCreateFlagBits to learn how to specify them (e.g. #VMA_VIRTUAL_ALLOCATION_CREATE_STRATEGY_MIN_TIME_BIT).

The following features are supported only by the allocator of real GPU memory and not by virtual allocations:
buffer-image granularity, `VMA_DEBUG_MARGIN`, `VMA_MIN_ALIGNMENT`.


\page debugging_memory_usage Debugging incorrect memory usage

If you suspect a bug with memory usage, like usage of uninitialized memory or
memory being overwritten out of bounds of an allocation,
you can use debug features of this library to verify it.

\section debugging_memory_usage_initialization Memory initialization

If you experience a bug with incorrect and nondeterministic data in your program and you suspect that uninitialized memory is being used,
you can enable automatic memory initialization to verify it.
To do so, define macro `VMA_DEBUG_INITIALIZE_ALLOCATIONS` to 1.

\code
#define VMA_DEBUG_INITIALIZE_ALLOCATIONS 1
#include "vk_mem_alloc.h"
\endcode

It causes the memory of all new allocations to be initialized with the bit pattern `0xDCDCDCDC`.
Before an allocation is destroyed, its memory is filled with the bit pattern `0xEFEFEFEF`.
Memory is automatically mapped and unmapped if necessary.

If you find these values while debugging your program, chances are good that you incorrectly
read Vulkan memory that is allocated but not initialized, or already freed, respectively.

Memory initialization works only with memory types that are `HOST_VISIBLE`.
It also works with dedicated allocations.

\section debugging_memory_usage_margins Margins

By default, allocations are laid out in memory blocks next to each other if possible
(considering required alignment, `bufferImageGranularity`, and `nonCoherentAtomSize`).

![Allocations without margin](../gfx/Margins_1.png)

Define macro `VMA_DEBUG_MARGIN` to some non-zero value (e.g. 16) to enforce the specified
number of bytes as a margin after every allocation.

\code
#define VMA_DEBUG_MARGIN 16
#include "vk_mem_alloc.h"
\endcode

![Allocations with margin](../gfx/Margins_2.png)

If your bug goes away after enabling margins, it may be caused by memory
being overwritten outside of allocation boundaries. This is not 100% certain, though.
A change in application behavior may also be caused by a different order and distribution
of allocations across memory blocks after margins are applied.

Margins work with all types of memory.

The margin is applied only to allocations made out of memory blocks and not to dedicated
allocations, which have their own memory block of a specific size.
It is thus not applied to allocations made using the #VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT flag
or to those automatically placed in dedicated allocations, e.g. due to their
large size or when recommended by the VK_KHR_dedicated_allocation extension.

Margins appear in the [JSON dump](@ref statistics_json_dump) as part of free space.

Note that enabling margins increases memory usage and fragmentation.

Margins do not apply to the \ref virtual_allocator.

\section debugging_memory_usage_corruption_detection Corruption detection

You can additionally define macro `VMA_DEBUG_DETECT_CORRUPTION` to 1 to enable validation
of the contents of the margins.

\code
#define VMA_DEBUG_MARGIN 16
#define VMA_DEBUG_DETECT_CORRUPTION 1
#include "vk_mem_alloc.h"
\endcode

When this feature is enabled, the number of bytes specified as `VMA_DEBUG_MARGIN`
(it must be a multiple of 4) after every allocation is filled with a magic number.
This idea is also known as a "canary".
Memory is automatically mapped and unmapped if necessary.

This number is validated automatically when the allocation is destroyed.
If it is not equal to the expected value, `VMA_ASSERT()` is executed.
This clearly means that either the CPU or the GPU has overwritten the memory outside the boundaries of the allocation,
which indicates a serious bug.

You can also explicitly request checking the margins of all allocations in all memory blocks
that belong to specified memory types by using function vmaCheckCorruption(),
or in memory blocks that belong to a specified custom pool, by using function
vmaCheckPoolCorruption().
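
For example (a minimal sketch):

\code
// Check all memory types by passing UINT32_MAX as the bit mask.
VkResult res = vmaCheckCorruption(allocator, UINT32_MAX);
// VK_SUCCESS means the margins were validated and no corruption was found.
\endcode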

Margin validation (corruption detection) works only for memory types that are
`HOST_VISIBLE` and `HOST_COHERENT`.


\page opengl_interop OpenGL Interop

VMA provides some features that help with interoperability with OpenGL.

\section opengl_interop_exporting_memory Exporting memory

If you want to attach `VkExportMemoryAllocateInfoKHR` structure to the `pNext` chain of memory allocations made by the library:

It is recommended to create \ref custom_memory_pools for such allocations.
Define and fill in your `VkExportMemoryAllocateInfoKHR` structure and attach it to VmaPoolCreateInfo::pMemoryAllocateNext
while creating the custom pool.
Please note that the structure must remain alive and unchanged for the whole lifetime of the #VmaPool,
not only while creating it, as no copy of the structure is made;
its original pointer is used for each allocation instead.
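
For example (a sketch; the handle type shown is just an assumption - use the one appropriate
for your platform and interop mechanism):

\code
// Note: in real code this structure must remain alive and unchanged
// for the whole lifetime of the pool.
VkExportMemoryAllocateInfoKHR exportMemAllocInfo = { VK_STRUCTURE_TYPE_EXPORT_MEMORY_ALLOCATE_INFO_KHR };
exportMemAllocInfo.handleTypes = VK_EXTERNAL_MEMORY_HANDLE_TYPE_OPAQUE_FD_BIT_KHR;

VmaPoolCreateInfo poolCreateInfo = {};
poolCreateInfo.memoryTypeIndex = memTypeIndex; // Found as described in \ref custom_memory_pools.
poolCreateInfo.pMemoryAllocateNext = &exportMemAllocInfo;

VmaPool pool;
VkResult res = vmaCreatePool(allocator, &poolCreateInfo, &pool);
// Check res...
\endcode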

If you want to export all memory allocated by the library from certain memory types,
including dedicated allocations and other allocations made from default pools,
an alternative solution is to fill in VmaAllocatorCreateInfo::pTypeExternalMemoryHandleTypes.
It should point to an array of `VkExternalMemoryHandleTypeFlagsKHR` values to be automatically passed by the library
through `VkExportMemoryAllocateInfoKHR` on each allocation made from a specific memory type.
Please note that new versions of the library also support dedicated allocations created in custom pools.

You should not mix these two methods in a way that would apply both to the same memory type.
Otherwise, the `VkExportMemoryAllocateInfoKHR` structure would be attached twice to the `pNext` chain of `VkMemoryAllocateInfo`.


\section opengl_interop_custom_alignment Custom alignment

Buffers or images exported to a different API like OpenGL may require a different alignment,
higher than the one used by the library automatically, queried from functions like `vkGetBufferMemoryRequirements`.
To impose such alignment:

It is recommended to create \ref custom_memory_pools for such allocations.
Set the VmaPoolCreateInfo::minAllocationAlignment member to the minimum alignment required for each allocation
to be made out of this pool.
The alignment actually used will be the maximum of this member and the alignment returned for the specific buffer or image
from a function like `vkGetBufferMemoryRequirements`, which is called by VMA automatically.

If you want to create a buffer with a specific minimum alignment out of default pools,
use the special function vmaCreateBufferWithAlignment(), which takes the additional parameter `minAlignment`.
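
A minimal sketch (the 4 KiB alignment is just an example value):

\code
VkBufferCreateInfo bufCreateInfo = { VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO };
bufCreateInfo.size = 65536;
bufCreateInfo.usage = VK_BUFFER_USAGE_TRANSFER_SRC_BIT;

VmaAllocationCreateInfo allocCreateInfo = {};
allocCreateInfo.usage = VMA_MEMORY_USAGE_AUTO;

VkBuffer buf;
VmaAllocation alloc;
VkResult res = vmaCreateBufferWithAlignment(allocator, &bufCreateInfo, &allocCreateInfo,
    4096, // minAlignment
    &buf, &alloc, nullptr);
// Check res...
\endcode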

Note that the problem of alignment affects only resources placed inside bigger `VkDeviceMemory` blocks and not dedicated
allocations, as these, by definition, are bound at offset 0 of their own dedicated block, which trivially satisfies any required alignment.
Contrary to Direct3D 12, Vulkan doesn't have a concept of alignment of the entire memory block passed on its allocation.


\page usage_patterns Recommended usage patterns

Vulkan gives great flexibility in memory allocation.
This chapter shows the most common patterns.

See also slides from talk:
[Sawicki, Adam. Advanced Graphics Techniques Tutorial: Memory management in Vulkan and DX12. Game Developers Conference, 2018](https://www.gdcvault.com/play/1025458/Advanced-Graphics-Techniques-Tutorial-New)


\section usage_patterns_gpu_only GPU-only resource

<b>When:</b>
Any resources that you frequently write and read on the GPU,
e.g. images used as color attachments (aka "render targets"), depth-stencil attachments,
images/buffers used as storage images/buffers (aka "Unordered Access View (UAV)").

<b>What to do:</b>
Let the library select the optimal memory type, which will likely have `VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT`.

\code
VkImageCreateInfo imgCreateInfo = { VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO };
imgCreateInfo.imageType = VK_IMAGE_TYPE_2D;
imgCreateInfo.extent.width = 3840;
imgCreateInfo.extent.height = 2160;
imgCreateInfo.extent.depth = 1;
imgCreateInfo.mipLevels = 1;
imgCreateInfo.arrayLayers = 1;
imgCreateInfo.format = VK_FORMAT_R8G8B8A8_UNORM;
imgCreateInfo.tiling = VK_IMAGE_TILING_OPTIMAL;
imgCreateInfo.initialLayout = VK_IMAGE_LAYOUT_UNDEFINED;
imgCreateInfo.usage = VK_IMAGE_USAGE_SAMPLED_BIT | VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT;
imgCreateInfo.samples = VK_SAMPLE_COUNT_1_BIT;

VmaAllocationCreateInfo allocCreateInfo = {};
allocCreateInfo.usage = VMA_MEMORY_USAGE_AUTO;
allocCreateInfo.flags = VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT;
allocCreateInfo.priority = 1.0f;

VkImage img;
VmaAllocation alloc;
vmaCreateImage(allocator, &imgCreateInfo, &allocCreateInfo, &img, &alloc, nullptr);
\endcode

<b>Also consider:</b>
Creating such resources as dedicated allocations using #VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT,
especially if they are large or if you plan to destroy and recreate them with different sizes,
e.g. when display resolution changes.
Prefer to create such resources first and all other GPU resources (like textures and vertex buffers) later.
When the VK_EXT_memory_priority extension is enabled, it is also worth setting a high priority on such allocations
to decrease the chance of them being evicted to system memory by the operating system.

\section usage_patterns_staging_copy_upload Staging copy for upload

<b>When:</b>
A "staging" buffer that you want to map and fill from CPU code, then use as a source of transfer
to some GPU resource.

<b>What to do:</b>
Use flag #VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT.
Let the library select the optimal memory type, which will always have `VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT`.

\code
VkBufferCreateInfo bufCreateInfo = { VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO };
bufCreateInfo.size = 65536;
bufCreateInfo.usage = VK_BUFFER_USAGE_TRANSFER_SRC_BIT;

VmaAllocationCreateInfo allocCreateInfo = {};
allocCreateInfo.usage = VMA_MEMORY_USAGE_AUTO;
allocCreateInfo.flags = VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT |
    VMA_ALLOCATION_CREATE_MAPPED_BIT;

VkBuffer buf;
VmaAllocation alloc;
VmaAllocationInfo allocInfo;
vmaCreateBuffer(allocator, &bufCreateInfo, &allocCreateInfo, &buf, &alloc, &allocInfo);

...

memcpy(allocInfo.pMappedData, myData, myDataSize);
\endcode

<b>Also consider:</b>
You can map the allocation using vmaMapMemory() or you can create it as persistently mapped
using #VMA_ALLOCATION_CREATE_MAPPED_BIT, as in the example above.


\section usage_patterns_readback Readback

<b>When:</b>
Buffers for data written by or transferred from the GPU that you want to read back on the CPU,
e.g. results of some computations.

<b>What to do:</b>
Use flag #VMA_ALLOCATION_CREATE_HOST_ACCESS_RANDOM_BIT.
Let the library select the optimal memory type, which will always have `VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT`
and `VK_MEMORY_PROPERTY_HOST_CACHED_BIT`.

\code
VkBufferCreateInfo bufCreateInfo = { VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO };
bufCreateInfo.size = 65536;
bufCreateInfo.usage = VK_BUFFER_USAGE_TRANSFER_DST_BIT;

VmaAllocationCreateInfo allocCreateInfo = {};
allocCreateInfo.usage = VMA_MEMORY_USAGE_AUTO;
allocCreateInfo.flags = VMA_ALLOCATION_CREATE_HOST_ACCESS_RANDOM_BIT |
    VMA_ALLOCATION_CREATE_MAPPED_BIT;

VkBuffer buf;
VmaAllocation alloc;
VmaAllocationInfo allocInfo;
vmaCreateBuffer(allocator, &bufCreateInfo, &allocCreateInfo, &buf, &alloc, &allocInfo);

...

const float* downloadedData = (const float*)allocInfo.pMappedData;
\endcode


\section usage_patterns_advanced_data_uploading Advanced data uploading

For resources that you frequently write on the CPU via a mapped pointer and
frequently read on the GPU, e.g. as a uniform buffer (also called "dynamic"), multiple options are possible:

-# The easiest solution is to have one copy of the resource in `HOST_VISIBLE` memory,
   even if it means system RAM (not `DEVICE_LOCAL`) on systems with a discrete graphics card,
   and make the device reach out to that resource directly.
   - Reads performed by the device will then go through the PCI Express bus.
     The performance of this access may be limited, but it may be fine depending on the size
     of this resource (whether it is small enough to quickly end up in GPU cache) and the sparsity
     of access.
-# On systems with unified memory (e.g. AMD APU or Intel integrated graphics, mobile chips),
   a memory type may be available that is both `HOST_VISIBLE` (available for mapping) and `DEVICE_LOCAL`
   (fast to access from the GPU). Then, it is likely the best choice for such a type of resource.
-# Systems with a discrete graphics card and separate video memory may or may not expose
   a memory type that is both `HOST_VISIBLE` and `DEVICE_LOCAL`, also known as Base Address Register (BAR).
   If they do, it represents a piece of VRAM (or entire VRAM, if ReBAR is enabled in the motherboard BIOS)
   that is available to the CPU for mapping.
   - Writes performed by the host to that memory go through the PCI Express bus.
     The performance of these writes may be limited, but it may be fine, especially on PCIe 4.0,
     as long as the rules of using uncached and write-combined memory are followed - only sequential writes and no reads.
-# Finally, you may need or prefer to create a separate copy of the resource in `DEVICE_LOCAL` memory,
   a separate "staging" copy in `HOST_VISIBLE` memory, and perform an explicit transfer command between them.

Thankfully, VMA offers an aid to create and use such resources in the way optimal
for the current Vulkan device. To help the library make the best choice,
use flag #VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT together with
#VMA_ALLOCATION_CREATE_HOST_ACCESS_ALLOW_TRANSFER_INSTEAD_BIT.
It will then prefer a memory type that is both `DEVICE_LOCAL` and `HOST_VISIBLE` (integrated memory or BAR),
but if no such memory type is available or allocation from it fails
(PC graphics cards have only 256 MB of BAR by default, unless ReBAR is supported and enabled in BIOS),
it will fall back to `DEVICE_LOCAL` memory for fast GPU access.
It is then up to you to detect that the allocation ended up in a memory type that is not `HOST_VISIBLE`,
in which case you need to create another "staging" allocation and perform explicit transfers.

\code
VkBufferCreateInfo bufCreateInfo = { VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO };
bufCreateInfo.size = 65536;
bufCreateInfo.usage = VK_BUFFER_USAGE_UNIFORM_BUFFER_BIT | VK_BUFFER_USAGE_TRANSFER_DST_BIT;

VmaAllocationCreateInfo allocCreateInfo = {};
allocCreateInfo.usage = VMA_MEMORY_USAGE_AUTO;
allocCreateInfo.flags = VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT |
    VMA_ALLOCATION_CREATE_HOST_ACCESS_ALLOW_TRANSFER_INSTEAD_BIT |
    VMA_ALLOCATION_CREATE_MAPPED_BIT;

VkBuffer buf;
VmaAllocation alloc;
VmaAllocationInfo allocInfo;
vmaCreateBuffer(allocator, &bufCreateInfo, &allocCreateInfo, &buf, &alloc, &allocInfo);

VkMemoryPropertyFlags memPropFlags;
vmaGetAllocationMemoryProperties(allocator, alloc, &memPropFlags);

if(memPropFlags & VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT)
{
    // Allocation ended up in a mappable memory and is already mapped - write to it directly.

    // [Executed in runtime]:
    memcpy(allocInfo.pMappedData, myData, myDataSize);
}
else
{
    // Allocation ended up in a non-mappable memory - need to transfer.
    VkBufferCreateInfo stagingBufCreateInfo = { VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO };
    stagingBufCreateInfo.size = 65536;
    stagingBufCreateInfo.usage = VK_BUFFER_USAGE_TRANSFER_SRC_BIT;

    VmaAllocationCreateInfo stagingAllocCreateInfo = {};
    stagingAllocCreateInfo.usage = VMA_MEMORY_USAGE_AUTO;
    stagingAllocCreateInfo.flags = VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT |
        VMA_ALLOCATION_CREATE_MAPPED_BIT;

    VkBuffer stagingBuf;
    VmaAllocation stagingAlloc;
    VmaAllocationInfo stagingAllocInfo;
    vmaCreateBuffer(allocator, &stagingBufCreateInfo, &stagingAllocCreateInfo,
        &stagingBuf, &stagingAlloc, &stagingAllocInfo);

    // [Executed in runtime]:
    memcpy(stagingAllocInfo.pMappedData, myData, myDataSize);
    //vkCmdPipelineBarrier: VK_ACCESS_HOST_WRITE_BIT --> VK_ACCESS_TRANSFER_READ_BIT
    VkBufferCopy bufCopy = {
        0, // srcOffset
        0, // dstOffset,
        myDataSize }; // size
    vkCmdCopyBuffer(cmdBuf, stagingBuf, buf, 1, &bufCopy);
}
\endcode

\section usage_patterns_other_use_cases Other use cases

Here are some other, less obvious use cases and their recommended settings:

- An image that is used only as a transfer source and destination, but should stay on the device,
  as it is used to temporarily store a copy of some texture, e.g. from the current to the next frame,
  for temporal antialiasing or other temporal effects.
  - Use `VkImageCreateInfo::usage = VK_IMAGE_USAGE_TRANSFER_SRC_BIT | VK_IMAGE_USAGE_TRANSFER_DST_BIT`
  - Use VmaAllocationCreateInfo::usage = #VMA_MEMORY_USAGE_AUTO
- An image that is used only as a transfer source and destination, but should be placed
  in system RAM even though it doesn't need to be mapped, because it serves as a "swap" copy to evict
  least recently used textures from VRAM.
  - Use `VkImageCreateInfo::usage = VK_IMAGE_USAGE_TRANSFER_SRC_BIT | VK_IMAGE_USAGE_TRANSFER_DST_BIT`
  - Use VmaAllocationCreateInfo::usage = #VMA_MEMORY_USAGE_AUTO_PREFER_HOST,
    as VMA needs a hint here to differentiate it from the previous case.
- A buffer that you want to map and write from the CPU and directly read from the GPU
  (e.g. as a uniform or vertex buffer), where you have a clear preference to place it in device or
  host memory due to its large size.
  - Use `VkBufferCreateInfo::usage = VK_BUFFER_USAGE_UNIFORM_BUFFER_BIT`
  - Use VmaAllocationCreateInfo::usage = #VMA_MEMORY_USAGE_AUTO_PREFER_DEVICE or #VMA_MEMORY_USAGE_AUTO_PREFER_HOST
  - Use VmaAllocationCreateInfo::flags = #VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT


\page configuration Configuration

Please check "CONFIGURATION SECTION" in the code to find macros that you can define
before each include of this file, or change directly in this file, to provide
your own implementation of basic facilities like assert, `min()` and `max()` functions,
mutex, atomic, etc.
The library uses its own implementation of containers by default, but you can switch to using
STL containers instead.

For example, define `VMA_ASSERT(expr)` before including the library to provide
a custom implementation of the assertion, compatible with your project.
By default it is defined to standard C `assert(expr)` in `_DEBUG` configuration
and empty otherwise.

\section config_Vulkan_functions Pointers to Vulkan functions

There are multiple ways to import pointers to Vulkan functions into the library.
In the simplest case you don't need to do anything.
If the compilation or linking of your program or the initialization of the #VmaAllocator
doesn't work for you, you can try to reconfigure it.

First, the allocator tries to fetch pointers to Vulkan functions linked statically,
like this:

\code
m_VulkanFunctions.vkAllocateMemory = (PFN_vkAllocateMemory)vkAllocateMemory;
\endcode

If you want to disable this feature, set configuration macro: `#define VMA_STATIC_VULKAN_FUNCTIONS 0`.

Second, you can provide the pointers yourself by setting member VmaAllocatorCreateInfo::pVulkanFunctions.
You can fetch them e.g. using functions `vkGetInstanceProcAddr` and `vkGetDeviceProcAddr` or
by using a helper library like [volk](https://github.com/zeux/volk).

Third, VMA tries to fetch the remaining pointers that are still null by calling
`vkGetInstanceProcAddr` and `vkGetDeviceProcAddr` on its own.
You only need to fill in VmaVulkanFunctions::vkGetInstanceProcAddr and VmaVulkanFunctions::vkGetDeviceProcAddr.
Other pointers will be fetched automatically.
If you want to disable this feature, set configuration macro: `#define VMA_DYNAMIC_VULKAN_FUNCTIONS 0`.
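
For example, relying on the third method could look like this (a sketch; `instance`, `physicalDevice`,
and `device` are assumed to be created earlier, e.g. with the help of volk):

\code
VmaVulkanFunctions vulkanFunctions = {};
vulkanFunctions.vkGetInstanceProcAddr = vkGetInstanceProcAddr;
vulkanFunctions.vkGetDeviceProcAddr = vkGetDeviceProcAddr;

VmaAllocatorCreateInfo allocatorCreateInfo = {};
allocatorCreateInfo.instance = instance;
allocatorCreateInfo.physicalDevice = physicalDevice;
allocatorCreateInfo.device = device;
allocatorCreateInfo.pVulkanFunctions = &vulkanFunctions;

VmaAllocator allocator;
VkResult res = vmaCreateAllocator(&allocatorCreateInfo, &allocator);
// Check res...
\endcode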

Finally, all the function pointers required by the library (considering the selected
Vulkan version and enabled extensions) are checked with `VMA_ASSERT` to ensure they are not null.


\section custom_memory_allocator Custom host memory allocator

If you use a custom allocator for CPU memory rather than the default operator `new`
and `delete` from C++, you can make this library use your allocator as well
by filling the optional member VmaAllocatorCreateInfo::pAllocationCallbacks. These
functions will be passed to Vulkan, as well as used by the library itself to
make any CPU-side allocations.

\section allocation_callbacks Device memory allocation callbacks

The library makes calls to `vkAllocateMemory()` and `vkFreeMemory()` internally.
You can set up callbacks to be informed about these calls, e.g. for the purpose
of gathering some statistics. To do it, fill the optional member
VmaAllocatorCreateInfo::pDeviceMemoryCallbacks.
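
A minimal sketch (the callback bodies are just placeholders):

\code
static VKAPI_ATTR void VKAPI_CALL MyAllocateCallback(VmaAllocator allocator, uint32_t memoryType,
    VkDeviceMemory memory, VkDeviceSize size, void* pUserData)
{
    // A vkAllocateMemory() call happened - log it, count it, etc.
}
static VKAPI_ATTR void VKAPI_CALL MyFreeCallback(VmaAllocator allocator, uint32_t memoryType,
    VkDeviceMemory memory, VkDeviceSize size, void* pUserData)
{
    // A vkFreeMemory() call happened - log it, count it, etc.
}

VmaDeviceMemoryCallbacks deviceMemoryCallbacks = {};
deviceMemoryCallbacks.pfnAllocate = MyAllocateCallback;
deviceMemoryCallbacks.pfnFree = MyFreeCallback;
allocatorCreateInfo.pDeviceMemoryCallbacks = &deviceMemoryCallbacks;
\endcode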

\section heap_memory_limit Device heap memory limit

When device memory of a certain heap runs out of free space, new allocations may
fail (returning an error code) or they may succeed, silently pushing some existing
memory blocks from GPU VRAM to system RAM (which degrades performance). This
behavior is implementation-dependent - it depends on the GPU vendor and graphics
driver.

On AMD cards it can be controlled while creating the Vulkan device object by using the
VK_AMD_memory_overallocation_behavior extension, if available.

Alternatively, if you want to test how your program behaves with a limited amount of Vulkan device
memory available, without switching your graphics card to one that really has
smaller VRAM, you can use a feature of this library intended for this purpose.
To do it, fill the optional member VmaAllocatorCreateInfo::pHeapSizeLimit.
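
A minimal sketch (limiting heap 0 to 256 MiB is just an example; `VK_WHOLE_SIZE` means no limit):

\code
VkDeviceSize heapSizeLimit[VK_MAX_MEMORY_HEAPS];
for(uint32_t i = 0; i < VK_MAX_MEMORY_HEAPS; ++i)
    heapSizeLimit[i] = VK_WHOLE_SIZE; // No limit on this heap.
heapSizeLimit[0] = 256ull * 1024 * 1024;

allocatorCreateInfo.pHeapSizeLimit = heapSizeLimit;
\endcode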
19254 
19255 
19256 
19257 \page vk_khr_dedicated_allocation VK_KHR_dedicated_allocation
19258 
19259 VK_KHR_dedicated_allocation is a Vulkan extension which can be used to improve
19260 performance on some GPUs. It augments Vulkan API with possibility to query
19261 driver whether it prefers particular buffer or image to have its own, dedicated
19262 allocation (separate `VkDeviceMemory` block) for better efficiency - to be able
19263 to do some internal optimizations. The extension is supported by this library.
19264 It will be used automatically when enabled.
19265 
19266 It has been promoted to core Vulkan 1.1, so if you use eligible Vulkan version
19267 and inform VMA about it by setting VmaAllocatorCreateInfo::vulkanApiVersion,
19268 you are all set.
19269 
19270 Otherwise, if you want to use it as an extension:
19271 
19272 1 . When creating Vulkan device, check if following 2 device extensions are
19273 supported (call `vkEnumerateDeviceExtensionProperties()`).
19274 If yes, enable them (fill `VkDeviceCreateInfo::ppEnabledExtensionNames`).
19275 
19276 - VK_KHR_get_memory_requirements2
19277 - VK_KHR_dedicated_allocation
19278 
19279 If you enabled these extensions:
19280 
19281 2 . Use #VMA_ALLOCATOR_CREATE_KHR_DEDICATED_ALLOCATION_BIT flag when creating
19282 your #VmaAllocator to inform the library that you enabled required extensions
19283 and you want the library to use them.
19284 
19285 \code
19286 allocatorInfo.flags |= VMA_ALLOCATOR_CREATE_KHR_DEDICATED_ALLOCATION_BIT;
19287 
19288 vmaCreateAllocator(&allocatorInfo, &allocator);
19289 \endcode
19290 
19291 That is all. The extension will be automatically used whenever you create a
19292 buffer using vmaCreateBuffer() or image using vmaCreateImage().
19293 
19294 When using the extension together with Vulkan Validation Layer, you will receive
19295 warnings like this:
19296 
19297 _vkBindBufferMemory(): Binding memory to buffer 0x33 but vkGetBufferMemoryRequirements() has not been called on that buffer._
19298 
19299 It is OK, you should just ignore it. It happens because you use function
19300 `vkGetBufferMemoryRequirements2KHR()` instead of standard
19301 `vkGetBufferMemoryRequirements()`, while the validation layer seems to be
19302 unaware of it.
19303 
19304 To learn more about this extension, see:
19305 
19306 - [VK_KHR_dedicated_allocation in Vulkan specification](https://www.khronos.org/registry/vulkan/specs/1.2-extensions/html/chap50.html#VK_KHR_dedicated_allocation)
19307 - [VK_KHR_dedicated_allocation unofficial manual](http://asawicki.info/articles/VK_KHR_dedicated_allocation.php5)



\page vk_ext_memory_priority VK_EXT_memory_priority

VK_EXT_memory_priority is a device extension that allows passing an additional "priority"
value to Vulkan memory allocations. The implementation may use this value to prefer keeping
buffers and images that are critical for performance in device-local memory
in cases when the memory is over-subscribed, while some others may be moved to system memory.

VMA offers convenient usage of this extension.
If you enable it, you can pass a "priority" parameter when creating allocations or custom pools
and the library automatically passes the value to Vulkan using this extension.

If you want to use this extension in connection with VMA, follow these steps:

\section vk_ext_memory_priority_initialization Initialization

1) Call `vkEnumerateDeviceExtensionProperties` for the physical device.
Check if the extension is supported - if the returned array of `VkExtensionProperties` contains "VK_EXT_memory_priority".

2) Call `vkGetPhysicalDeviceFeatures2` for the physical device instead of the old `vkGetPhysicalDeviceFeatures`.
Attach the additional structure `VkPhysicalDeviceMemoryPriorityFeaturesEXT` to `VkPhysicalDeviceFeatures2::pNext` to be returned.
Check if the device feature is really supported - check if `VkPhysicalDeviceMemoryPriorityFeaturesEXT::memoryPriority` is true.

3) While creating the device with `vkCreateDevice`, enable this extension - add "VK_EXT_memory_priority"
to the list passed as `VkDeviceCreateInfo::ppEnabledExtensionNames`.

4) While creating the device, also don't set `VkDeviceCreateInfo::pEnabledFeatures`.
Fill in the `VkPhysicalDeviceFeatures2` structure instead and pass it as `VkDeviceCreateInfo::pNext`.
Enable this device feature - attach the additional structure `VkPhysicalDeviceMemoryPriorityFeaturesEXT` to
the `VkPhysicalDeviceFeatures2::pNext` chain and set its member `memoryPriority` to `VK_TRUE`.

5) While creating #VmaAllocator with vmaCreateAllocator() inform VMA that you
have enabled this extension and feature - add #VMA_ALLOCATOR_CREATE_EXT_MEMORY_PRIORITY_BIT
to VmaAllocatorCreateInfo::flags.
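
A condensed sketch of steps 2)-5), assuming the extension was found in step 1)
and `physicalDevice` is already chosen:

\code
VkPhysicalDeviceMemoryPriorityFeaturesEXT priorityFeatures = {
    VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_MEMORY_PRIORITY_FEATURES_EXT };
VkPhysicalDeviceFeatures2 features2 = { VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_FEATURES_2 };
features2.pNext = &priorityFeatures;
vkGetPhysicalDeviceFeatures2(physicalDevice, &features2);

if(priorityFeatures.memoryPriority == VK_TRUE)
{
    // Enable only the feature this sketch needs.
    VkPhysicalDeviceMemoryPriorityFeaturesEXT enabledPriorityFeatures = {
        VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_MEMORY_PRIORITY_FEATURES_EXT };
    enabledPriorityFeatures.memoryPriority = VK_TRUE;
    VkPhysicalDeviceFeatures2 enabledFeatures2 = { VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_FEATURES_2 };
    enabledFeatures2.pNext = &enabledPriorityFeatures;

    const char* extensionNames[] = { VK_EXT_MEMORY_PRIORITY_EXTENSION_NAME };
    VkDeviceCreateInfo deviceCreateInfo = { VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO };
    deviceCreateInfo.pNext = &enabledFeatures2; // Instead of pEnabledFeatures.
    deviceCreateInfo.enabledExtensionCount = 1;
    deviceCreateInfo.ppEnabledExtensionNames = extensionNames;
    // ... fill queue create infos, call vkCreateDevice() ...

    // Finally, when filling VmaAllocatorCreateInfo:
    allocatorInfo.flags |= VMA_ALLOCATOR_CREATE_EXT_MEMORY_PRIORITY_BIT;
}
\endcode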

\section vk_ext_memory_priority_usage Usage

When using this extension, you should initialize the following members:

- VmaAllocationCreateInfo::priority when creating a dedicated allocation with #VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT.
- VmaPoolCreateInfo::priority when creating a custom pool.

It should be a floating-point value between `0.0f` and `1.0f`, where the recommended default is `0.5f`.
Memory allocated with a higher value can be treated by the Vulkan implementation as higher priority
and so it has a lower chance of being pushed out to system memory and experiencing degraded performance.

It might be a good idea to create performance-critical resources like color-attachment or depth-stencil images
as dedicated and set a high priority on them. For example:

\code
VkImageCreateInfo imgCreateInfo = { VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO };
imgCreateInfo.imageType = VK_IMAGE_TYPE_2D;
imgCreateInfo.extent.width = 3840;
imgCreateInfo.extent.height = 2160;
imgCreateInfo.extent.depth = 1;
imgCreateInfo.mipLevels = 1;
imgCreateInfo.arrayLayers = 1;
imgCreateInfo.format = VK_FORMAT_R8G8B8A8_UNORM;
imgCreateInfo.tiling = VK_IMAGE_TILING_OPTIMAL;
imgCreateInfo.initialLayout = VK_IMAGE_LAYOUT_UNDEFINED;
imgCreateInfo.usage = VK_IMAGE_USAGE_SAMPLED_BIT | VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT;
imgCreateInfo.samples = VK_SAMPLE_COUNT_1_BIT;

VmaAllocationCreateInfo allocCreateInfo = {};
allocCreateInfo.usage = VMA_MEMORY_USAGE_AUTO;
allocCreateInfo.flags = VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT;
allocCreateInfo.priority = 1.0f;

VkImage img;
VmaAllocation alloc;
vmaCreateImage(allocator, &imgCreateInfo, &allocCreateInfo, &img, &alloc, nullptr);
\endcode

The `priority` member is ignored in the following situations:

- Allocations created in custom pools: They inherit the priority, along with all other allocation parameters,
  from the parameters passed in #VmaPoolCreateInfo when the pool was created.
- Allocations created in default pools: They inherit the priority from the parameters
  VMA used when creating default pools, which means `priority == 0.5f`.


\page vk_amd_device_coherent_memory VK_AMD_device_coherent_memory

VK_AMD_device_coherent_memory is a device extension that enables access to
additional memory types with the `VK_MEMORY_PROPERTY_DEVICE_COHERENT_BIT_AMD` and
`VK_MEMORY_PROPERTY_DEVICE_UNCACHED_BIT_AMD` flags. It is useful mostly for
allocation of buffers intended for writing "breadcrumb markers" in between passes
or draw calls, which in turn are useful for debugging GPU crash/hang/TDR cases.

When the extension is available but has not been enabled, the Vulkan physical device
still exposes those memory types, but their usage is forbidden. VMA automatically
takes care of that - it returns `VK_ERROR_FEATURE_NOT_PRESENT` when an attempt
to allocate memory of such a type is made.

If you want to use this extension in connection with VMA, follow these steps:

\section vk_amd_device_coherent_memory_initialization Initialization

1) Call `vkEnumerateDeviceExtensionProperties` for the physical device.
Check if the extension is supported - if the returned array of `VkExtensionProperties` contains "VK_AMD_device_coherent_memory".

2) Call `vkGetPhysicalDeviceFeatures2` for the physical device instead of the old `vkGetPhysicalDeviceFeatures`.
Attach the additional structure `VkPhysicalDeviceCoherentMemoryFeaturesAMD` to `VkPhysicalDeviceFeatures2::pNext` to be returned.
Check if the device feature is really supported - check if `VkPhysicalDeviceCoherentMemoryFeaturesAMD::deviceCoherentMemory` is true.

3) While creating the device with `vkCreateDevice`, enable this extension - add "VK_AMD_device_coherent_memory"
to the list passed as `VkDeviceCreateInfo::ppEnabledExtensionNames`.

4) While creating the device, also don't set `VkDeviceCreateInfo::pEnabledFeatures`.
Fill in the `VkPhysicalDeviceFeatures2` structure instead and pass it as `VkDeviceCreateInfo::pNext`.
Enable this device feature - attach the additional structure `VkPhysicalDeviceCoherentMemoryFeaturesAMD` to
`VkPhysicalDeviceFeatures2::pNext` and set its member `deviceCoherentMemory` to `VK_TRUE`.

5) While creating #VmaAllocator with vmaCreateAllocator() inform VMA that you
have enabled this extension and feature - add #VMA_ALLOCATOR_CREATE_AMD_DEVICE_COHERENT_MEMORY_BIT
to VmaAllocatorCreateInfo::flags.

\section vk_amd_device_coherent_memory_usage Usage

After following the steps described above, you can create VMA allocations and custom pools
out of the special `DEVICE_COHERENT` and `DEVICE_UNCACHED` memory types on eligible
devices. There are multiple ways to do it, for example (a sketch follows the list):

- You can request or prefer to allocate out of such memory types by adding
  `VK_MEMORY_PROPERTY_DEVICE_UNCACHED_BIT_AMD` to VmaAllocationCreateInfo::requiredFlags
  or VmaAllocationCreateInfo::preferredFlags. Those flags can be freely mixed with
  other ways of \ref choosing_memory_type, like setting VmaAllocationCreateInfo::usage.
- If you manually found a memory type index to use for this purpose, force allocation
  from this specific index by setting VmaAllocationCreateInfo::memoryTypeBits `= 1u << index`.
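
A minimal sketch of the first option, assuming the allocator was created with
#VMA_ALLOCATOR_CREATE_AMD_DEVICE_COHERENT_MEMORY_BIT; the buffer size and usage
are example values for a small "breadcrumb" buffer:

\code
VkBufferCreateInfo bufCreateInfo = { VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO };
bufCreateInfo.size = 4096;
bufCreateInfo.usage = VK_BUFFER_USAGE_TRANSFER_DST_BIT;

VmaAllocationCreateInfo allocCreateInfo = {};
allocCreateInfo.usage = VMA_MEMORY_USAGE_AUTO;
// Require the special AMD memory properties.
allocCreateInfo.requiredFlags = VK_MEMORY_PROPERTY_DEVICE_COHERENT_BIT_AMD |
    VK_MEMORY_PROPERTY_DEVICE_UNCACHED_BIT_AMD;

VkBuffer buf;
VmaAllocation alloc;
vmaCreateBuffer(allocator, &bufCreateInfo, &allocCreateInfo, &buf, &alloc, nullptr);
\endcode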

\section vk_amd_device_coherent_memory_more_information More information

To learn more about this extension, see [VK_AMD_device_coherent_memory in Vulkan specification](https://www.khronos.org/registry/vulkan/specs/1.2-extensions/man/html/VK_AMD_device_coherent_memory.html)

Example use of this extension can be found in the code of the sample and test suite
accompanying this library.


\page enabling_buffer_device_address Enabling buffer device address

The device extension VK_KHR_buffer_device_address
allows fetching a raw GPU pointer to a buffer and passing it for usage in shader code.
It has been promoted to core Vulkan 1.2.

If you want to use this feature in connection with VMA, follow these steps:

\section enabling_buffer_device_address_initialization Initialization

1) (For Vulkan version < 1.2) Call `vkEnumerateDeviceExtensionProperties` for the physical device.
Check if the extension is supported - if the returned array of `VkExtensionProperties` contains
"VK_KHR_buffer_device_address".

2) Call `vkGetPhysicalDeviceFeatures2` for the physical device instead of the old `vkGetPhysicalDeviceFeatures`.
Attach the additional structure `VkPhysicalDeviceBufferDeviceAddressFeatures*` to `VkPhysicalDeviceFeatures2::pNext` to be returned.
Check if the device feature is really supported - check if `VkPhysicalDeviceBufferDeviceAddressFeatures::bufferDeviceAddress` is true.

3) (For Vulkan version < 1.2) While creating the device with `vkCreateDevice`, enable this extension - add
"VK_KHR_buffer_device_address" to the list passed as `VkDeviceCreateInfo::ppEnabledExtensionNames`.

4) While creating the device, also don't set `VkDeviceCreateInfo::pEnabledFeatures`.
Fill in the `VkPhysicalDeviceFeatures2` structure instead and pass it as `VkDeviceCreateInfo::pNext`.
Enable this device feature - attach the additional structure `VkPhysicalDeviceBufferDeviceAddressFeatures*` to
`VkPhysicalDeviceFeatures2::pNext` and set its member `bufferDeviceAddress` to `VK_TRUE`.

5) While creating #VmaAllocator with vmaCreateAllocator() inform VMA that you
have enabled this feature - add #VMA_ALLOCATOR_CREATE_BUFFER_DEVICE_ADDRESS_BIT
to VmaAllocatorCreateInfo::flags.

\section enabling_buffer_device_address_usage Usage

After following the steps described above, you can create buffers with `VK_BUFFER_USAGE_SHADER_DEVICE_ADDRESS_BIT*` using VMA.
The library automatically adds `VK_MEMORY_ALLOCATE_DEVICE_ADDRESS_BIT*` to
allocated memory blocks wherever it might be needed.
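
A minimal sketch, assuming the allocator was created with
#VMA_ALLOCATOR_CREATE_BUFFER_DEVICE_ADDRESS_BIT; the size and usage flags are example values:

\code
VkBufferCreateInfo bufCreateInfo = { VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO };
bufCreateInfo.size = 65536;
bufCreateInfo.usage = VK_BUFFER_USAGE_STORAGE_BUFFER_BIT |
    VK_BUFFER_USAGE_SHADER_DEVICE_ADDRESS_BIT;

VmaAllocationCreateInfo allocCreateInfo = {};
allocCreateInfo.usage = VMA_MEMORY_USAGE_AUTO;

VkBuffer buf;
VmaAllocation alloc;
vmaCreateBuffer(allocator, &bufCreateInfo, &allocCreateInfo, &buf, &alloc, nullptr);

// Fetch the raw GPU address to pass to a shader.
// On Vulkan < 1.2 with the extension, use vkGetBufferDeviceAddressKHR instead.
VkBufferDeviceAddressInfo addressInfo = { VK_STRUCTURE_TYPE_BUFFER_DEVICE_ADDRESS_INFO };
addressInfo.buffer = buf;
VkDeviceAddress address = vkGetBufferDeviceAddress(device, &addressInfo);
\endcode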

Please note that the library supports only `VK_BUFFER_USAGE_SHADER_DEVICE_ADDRESS_BIT*`.
The second part of this functionality related to "capture and replay" is not supported,
as it is intended for usage in debugging tools like RenderDoc, not in everyday Vulkan usage.

\section enabling_buffer_device_address_more_information More information

To learn more about this extension, see [VK_KHR_buffer_device_address in Vulkan specification](https://www.khronos.org/registry/vulkan/specs/1.2-extensions/html/chap46.html#VK_KHR_buffer_device_address)

Example use of this extension can be found in the code of the sample and test suite
accompanying this library.

\page general_considerations General considerations

\section general_considerations_thread_safety Thread safety

- The library has no global state, so separate #VmaAllocator objects can be used
  independently.
  There should be no need to create multiple such objects though - one per `VkDevice` is enough.
- By default, all calls to functions that take #VmaAllocator as the first parameter
  are safe to call from multiple threads simultaneously because they are
  synchronized internally when needed.
  This includes allocation and deallocation from the default memory pool, as well as custom #VmaPool.
- When the allocator is created with the #VMA_ALLOCATOR_CREATE_EXTERNALLY_SYNCHRONIZED_BIT
  flag, calls to functions that take such a #VmaAllocator object must be
  synchronized externally.
- Access to a #VmaAllocation object must be externally synchronized. For example,
  you must not call vmaGetAllocationInfo() and vmaMapMemory() from different
  threads at the same time if you pass the same #VmaAllocation object to these
  functions (a sketch of such synchronization follows this list).
- #VmaVirtualBlock is not safe to be used from multiple threads simultaneously.
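
A minimal sketch of such external synchronization, assuming one shared #VmaAllocation
is mapped from multiple threads - the mutex and the helper function are this example's
own, not part of the library:

\code
std::mutex allocationMutex;

void* MapFromAnyThread(VmaAllocator allocator, VmaAllocation allocation)
{
    // Serialize all access to this particular allocation.
    std::lock_guard<std::mutex> lock(allocationMutex);
    void* mappedData = nullptr;
    vmaMapMemory(allocator, allocation, &mappedData);
    return mappedData;
}
\endcode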

\section general_considerations_versioning_and_compatibility Versioning and compatibility

The library uses [**Semantic Versioning**](https://semver.org/),
which means version numbers follow the convention: Major.Minor.Patch (e.g. 2.3.0), where:

- Incremented Patch version means a release is backward- and forward-compatible,
  introducing only some internal improvements, bug fixes, optimizations etc.
  or changes that are out of scope of the official API described in this documentation.
- Incremented Minor version means a release is backward-compatible,
  so existing code that uses the library should continue to work, while some new
  symbols could have been added: new structures, functions, new values in existing
  enums and bit flags, new structure members, but not new function parameters.
- Incremented Major version means a release could break some backward compatibility.

All changes between official releases are documented in the file "CHANGELOG.md".

\warning Backward compatibility is considered on the level of C++ source code, not binary linkage.
Adding new members to existing structures is treated as backward compatible if initializing
the new members to binary zero results in the old behavior.
You should always fully initialize all library structures to zeros and not rely on their
exact binary size.

\section general_considerations_validation_layer_warnings Validation layer warnings

When using this library, you may encounter the following types of warnings issued by
the Vulkan validation layer. They don't necessarily indicate a bug, so you may need
to just ignore them.

- *vkBindBufferMemory(): Binding memory to buffer 0xeb8e4 but vkGetBufferMemoryRequirements() has not been called on that buffer.*
  - It happens when the VK_KHR_dedicated_allocation extension is enabled.
    The `vkGetBufferMemoryRequirements2KHR` function is used instead, while the validation layer seems to be unaware of it.
- *Mapping an image with layout VK_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL can result in undefined behavior if this memory is used by the device. Only GENERAL or PREINITIALIZED should be used.*
  - It happens when you map a buffer or image, because the library maps the entire
    `VkDeviceMemory` block, where different types of images and buffers may end
    up together, especially on GPUs with unified memory like Intel.
- *Non-linear image 0xebc91 is aliased with linear buffer 0xeb8e4 which may indicate a bug.*
  - It may happen when you use [defragmentation](@ref defragmentation).

\section general_considerations_allocation_algorithm Allocation algorithm

The library uses the following algorithm for allocation, in order:

-# Try to find a free range of memory in existing blocks.
-# If failed, try to create a new block of `VkDeviceMemory`, with the preferred block size.
-# If failed, try to create such a block with size / 2, size / 4, size / 8.
-# If failed, try to allocate a separate `VkDeviceMemory` for this allocation,
   just like when you use #VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT.
-# If failed, choose another memory type that meets the requirements specified in
   VmaAllocationCreateInfo and go to point 1.
-# If failed, return `VK_ERROR_OUT_OF_DEVICE_MEMORY`.

\section general_considerations_features_not_supported Features not supported

Features deliberately excluded from the scope of this library:

-# **Data transfer.** Uploading (streaming) and downloading data of buffers and images
   between CPU and GPU memory and the related synchronization are the responsibility of the user.
   Defining some "texture" object that would automatically stream its data from a
   staging copy in CPU memory to GPU memory would rather be a feature of another,
   higher-level library implemented on top of VMA.
   VMA doesn't record any commands to a `VkCommandBuffer`. It just allocates memory.
-# **Recreation of buffers and images.** Although the library has functions for
   buffer and image creation: vmaCreateBuffer(), vmaCreateImage(), you need to
   recreate these objects yourself after defragmentation. That is because the big
   structures `VkBufferCreateInfo`, `VkImageCreateInfo` are not stored in the
   #VmaAllocation object.
-# **Handling CPU memory allocation failures.** When dynamically creating small C++
   objects in CPU memory (not Vulkan memory), allocation failures are not checked
   and handled gracefully, because that would complicate the code significantly and
   is usually not needed in desktop PC applications anyway.
   Success of an allocation is just checked with an assert.
-# **Code free of any compiler warnings.** Maintaining the library to compile and
   work correctly on so many different platforms is hard enough. Being free of
   any warnings, on any version of any compiler, is simply not feasible.
   There are many preprocessor macros that make some variables unused, function parameters unreferenced,
   or conditional expressions constant in some configurations.
   The code of this library should not be bigger or more complicated just to silence these warnings.
   It is recommended to disable such warnings instead.
-# This is a C++ library with a C interface. **Bindings or ports to any other programming languages** are welcome as external projects but
   are not going to be included in this repository.
*/