Lines Matching full:allocations

99 /* Pool usage% threshold when currently covered allocations are skipped. */
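
Upstream defines this threshold as KFENCE_SKIP_COVERED_THRESHOLD (75%). A minimal sketch of the decision it feeds, with nr_allocated standing in for the real "currently allocated" counter:

    #include <linux/atomic.h>

    /* Pool usage% above which already-covered allocations are skipped. */
    #define KFENCE_SKIP_COVERED_THRESHOLD 75

    static atomic_long_t nr_allocated;  /* stand-in for the real counter */

    static bool should_skip_covered(void)
    {
        unsigned long thresh =
            (CONFIG_KFENCE_NUM_OBJECTS * KFENCE_SKIP_COVERED_THRESHOLD) / 100;

        return atomic_long_read(&nr_allocated) > thresh;
    }
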
136 * to gate allocations, to avoid a load and compare if KFENCE is disabled.
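
With CONFIG_KFENCE_STATIC_KEYS, the allocation hook compiles to a patched no-op branch while KFENCE is idle, so the fast path pays neither a load nor a compare. A sketch of the pattern, modelled on the kfence_alloc() wrapper in include/linux/kfence.h (the _sketch suffix marks it as illustrative):

    #include <linux/jump_label.h>
    #include <linux/kfence.h>   /* __kfence_alloc() */

    DEFINE_STATIC_KEY_FALSE(kfence_allocation_key);

    static __always_inline void *kfence_alloc_sketch(struct kmem_cache *s,
                                                     size_t size, gfp_t flags)
    {
        /* Patched to a straight-line no-op while the key is disabled. */
        if (static_branch_unlikely(&kfence_allocation_key))
            return __kfence_alloc(s, size, flags);
        return NULL;  /* caller falls back to the normal slab path */
    }
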
145 * allocations of the same source filling up the pool.
147 * Assuming a range of 15%-85% unique allocations in the pool at any point in
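
These two fragments describe the coverage mechanism: a Counting Bloom filter keyed on a hash of the allocation stack trace, bounding how often a single allocation site can occupy the pool. A condensed sketch of the idea, with the table size and hash count chosen for illustration rather than taken from the kernel's tuned values:

    #include <linux/atomic.h>
    #include <linux/hash.h>

    #define ALLOC_COVERED_ORDER 10                  /* illustrative */
    #define ALLOC_COVERED_SIZE  (1 << ALLOC_COVERED_ORDER)
    #define ALLOC_COVERED_HNUM  2                   /* hash functions */

    static atomic_t alloc_covered[ALLOC_COVERED_SIZE];

    /* Chain the stack-trace hash to derive each filter slot. */
    static u32 next_slot(u32 *hash)
    {
        *hash = hash_32(*hash, ALLOC_COVERED_ORDER);
        return *hash;
    }

    static void alloc_covered_add(u32 stack_hash, int val)
    {
        u32 h = stack_hash;
        int i;

        for (i = 0; i < ALLOC_COVERED_HNUM; i++)
            atomic_add(val, &alloc_covered[next_slot(&h)]);
    }

    /* Covered only if every slot for this stack trace is non-zero. */
    static bool alloc_covered_contains(u32 stack_hash)
    {
        u32 h = stack_hash;
        int i;

        for (i = 0; i < ALLOC_COVERED_HNUM; i++)
            if (!atomic_read(&alloc_covered[next_slot(&h)]))
                return false;
        return true;
    }

Counting slots (rather than plain bits) let frees decrement the filter, so a site's coverage decays as its objects are released.
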
184 [KFENCE_COUNTER_ALLOCS] = "total allocations",
186 [KFENCE_COUNTER_ZOMBIES] = "zombie allocations",
188 [KFENCE_COUNTER_SKIP_INCOMPAT] = "skipped allocations (incompatible)",
189 [KFENCE_COUNTER_SKIP_CAPACITY] = "skipped allocations (capacity)",
190 [KFENCE_COUNTER_SKIP_COVERED] = "skipped allocations (covered)",
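
The fragments above are entries of a counter-name table indexed by an enum; each event in the allocation path bumps one counter, and the table drives the debugfs stats output. A trimmed sketch limited to the counters visible above:

    #include <linux/atomic.h>
    #include <linux/seq_file.h>

    enum kfence_counter_id {
        KFENCE_COUNTER_ALLOCS,
        KFENCE_COUNTER_ZOMBIES,
        KFENCE_COUNTER_SKIP_INCOMPAT,
        KFENCE_COUNTER_SKIP_CAPACITY,
        KFENCE_COUNTER_SKIP_COVERED,
        KFENCE_COUNTER_COUNT,
    };

    static atomic_long_t counters[KFENCE_COUNTER_COUNT];

    static const char *const counter_names[] = {
        [KFENCE_COUNTER_ALLOCS]        = "total allocations",
        [KFENCE_COUNTER_ZOMBIES]       = "zombie allocations",
        [KFENCE_COUNTER_SKIP_INCOMPAT] = "skipped allocations (incompatible)",
        [KFENCE_COUNTER_SKIP_CAPACITY] = "skipped allocations (capacity)",
        [KFENCE_COUNTER_SKIP_COVERED]  = "skipped allocations (covered)",
    };

    /* Shown via /sys/kernel/debug/kfence/stats in the real driver. */
    static int stats_show(struct seq_file *seq, void *v)
    {
        int i;

        for (i = 0; i < KFENCE_COUNTER_COUNT; i++)
            seq_printf(seq, "%s: %ld\n", counter_names[i],
                       atomic_long_read(&counters[i]));
        return 0;
    }
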
439 * Note: for allocations made before RNG initialization, will always in kfence_guarded_alloc()
442 * KFENCE to detect bugs due to earlier allocations. The only downside in kfence_guarded_alloc()
444 * such allocations. in kfence_guarded_alloc()
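
This note concerns where the object lands inside its guarded page: a random draw picks the left or right edge, alternating whether an out-of-bounds access trips the preceding or the following guard. Before the RNG is seeded the draw is constant, so early allocations are deterministic, which the comment argues is an acceptable cost. A sketch of the draw (get_random_u32_below() on recent kernels; older trees used prandom_u32_max()):

    #include <linux/random.h>

    /*
     * Pick the page edge for a new guarded object. A constant result
     * before RNG init only makes early allocations deterministic;
     * enabling KFENCE early still catches bugs in those allocations.
     */
    static bool place_at_right_edge(void)
    {
        return get_random_u32_below(2);
    }
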
675 * otherwise overlap with allocations returned by kfence_alloc(), which in kfence_init_pool_early()
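
The overlap matters because kfence_alloc() only ever returns addresses inside the dedicated pool, and the rest of the kernel identifies KFENCE objects purely by that range, so pages handed back to the page allocator during early init must fall outside it. The check below mirrors the upstream is_kfence_address() helper from include/linux/kfence.h (KFENCE_POOL_SIZE comes from that header):

    extern char *__kfence_pool;  /* base of the dedicated pool */

    static __always_inline bool is_kfence_address(const void *addr)
    {
        /*
         * Unsigned wrap-around turns this into a single range check; the
         * "&& __kfence_pool" term keeps it false when KFENCE is unused.
         */
        return unlikely((unsigned long)((char *)addr - __kfence_pool) <
                        KFENCE_POOL_SIZE && __kfence_pool);
    }
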
820 * avoids IPIs, at the cost of not immediately capturing allocations if the
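
This fragment is truncated, but the trade-off it names matches KFENCE's documented design: toggling a static branch broadcasts IPIs to patch code on every CPU, whereas a plain atomic gate opened by deferred work avoids that, at the cost that allocations are not captured the moment the sample interval elapses. A sketch of the gate timer, assuming the upstream names kfence_timer, toggle_allocation_gate, and kfence_sample_interval:

    #include <linux/atomic.h>
    #include <linux/jiffies.h>
    #include <linux/workqueue.h>

    static unsigned long kfence_sample_interval __read_mostly = 100;  /* ms */

    /* 0 = gate open; the first allocation to claim it gets sampled. */
    static atomic_t kfence_allocation_gate = ATOMIC_INIT(1);

    static void toggle_allocation_gate(struct work_struct *work);
    static DECLARE_DELAYED_WORK(kfence_timer, toggle_allocation_gate);

    static void toggle_allocation_gate(struct work_struct *work)
    {
        /* Open the gate: a plain store other CPUs observe, no IPIs. */
        atomic_set(&kfence_allocation_gate, 0);
        queue_delayed_work(system_unbound_wq, &kfence_timer,
                           msecs_to_jiffies(kfence_sample_interval));
    }
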
1017 * This cache still has allocations, and we should not in kfence_shutdown_cache()
1020 * behaviour of keeping the allocations alive (leak the in kfence_shutdown_cache()
1022 * allocations" as the KFENCE objects are the only ones in kfence_shutdown_cache()
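
When kmem_cache_destroy() runs while KFENCE still holds live objects from that cache, KFENCE keeps them alive rather than forcibly freeing them, matching the kernel's default leak-the-cache behaviour; the objects merely lose their owner. A simplified sketch, with the metadata struct reduced to the one relevant field:

    struct kfence_metadata_sketch {
        struct kmem_cache *cache;  /* owning cache; NULL once disowned */
        /* ... state, lock, alloc/free stack traces ... */
    };

    static atomic_long_t nr_zombies;

    static void shutdown_cache_object(struct kfence_metadata_sketch *meta,
                                      struct kmem_cache *s)
    {
        if (meta->cache != s)
            return;
        /*
         * Keep the allocation alive but sever the cache link: the object
         * becomes a "zombie", still usable by whoever holds a pointer to
         * it, while its owning cache no longer exists.
         */
        meta->cache = NULL;
        atomic_long_inc(&nr_zombies);
    }
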
1064 * Skip allocations from non-default zones, including DMA. We cannot in __kfence_alloc()
1076 * Skip allocations for this slab, if KFENCE has been disabled for in __kfence_alloc()
1091 * Calling wake_up() here may deadlock when allocations happen in __kfence_alloc()
1109 * full, including avoiding long-lived allocations of the same source in __kfence_alloc()
1110 * filling up the pool (e.g. pagecache allocations). in __kfence_alloc()
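
Taken together, the __kfence_alloc() fragments describe a chain of cheap skips before the expensive work: incompatible zones, per-slab opt-out (SLAB_SKIP_KFENCE), the sample-interval gate, and only then the coverage check, deliberately placed after the gate so that when the pool is nearly full, repeat stack traces such as pagecache allocations do not crowd out unique ones even if that forfeits an interval. The wake_up() fragment refers to the static-keys mode, where the winning allocation wakes the gate timer through an irq_work because a direct wake_up() could deadlock inside timer code. A condensed sketch, reusing helpers sketched above, with alloc_stack_hash() a hypothetical stand-in for the real stack-hashing step and kfence_guarded_alloc() shown with a simplified signature:

    void *__kfence_alloc_sketch(struct kmem_cache *s, size_t size, gfp_t flags)
    {
        /* Non-default zones (e.g. __GFP_DMA): the pool's pages cannot
         * guarantee the requested placement, so skip as incompatible. */
        if (flags & GFP_ZONEMASK)
            return NULL;

        /* Per-cache opt-out: KFENCE disabled for this slab. */
        if (s->flags & SLAB_SKIP_KFENCE)
            return NULL;

        /* Only the first allocation after the gate opens may proceed. */
        if (atomic_inc_return(&kfence_allocation_gate) > 1)
            return NULL;

        /* Expensive coverage check, done only in this slow path: once the
         * pool is past the usage threshold, skip already-covered stacks. */
        if (should_skip_covered() && alloc_covered_contains(alloc_stack_hash()))
            return NULL;

        return kfence_guarded_alloc(s, size, flags);
    }
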