Searched full:invalidations (Results 1 – 25 of 134) sorted by relevance
110 /* Serialize global tlb invalidations */
114 * Batch TLB invalidations
119 * so we track how many TLB invalidations have been
105 * invalidations so it is good to avoid paying the forcewake cost and in mmio_invalidate_full()
41 You may be doing too many individual invalidations if you see the
43 profiles. If you believe that individual invalidations being
138 * in order to force TLB invalidations to be global as to in mm_context_add_copro()
162 * for the time being. Invalidations will remain global if in mm_context_remove_copro()
164 * it could make some invalidations local with no flush in mm_context_remove_copro()
77 /* Enable use of broadcast TLB invalidations. We don't always set it
79 * use of such invalidations
148 * in order to force TLB invalidations to be global as to in mm_context_add_copro()
172 * for the time being. Invalidations will remain global if in mm_context_remove_copro()
174 * it could make some invalidations local with no flush in mm_context_remove_copro()
68 /* Enable use of broadcast TLB invalidations. We don't always set it
70 * use of such invalidations
43 * This can typically be used for things like IPI for tlb invalidations
38 * Broadcast I-cache block invalidations by default. in shx3_cache_init()
29 * invalidations need to be broadcasted to all other cpu in the system in
235 u64 invalidations = 0; in mlx5_ib_invalidate_range() local
260 * overwrite the same MTTs. Concurent invalidations might race us, in mlx5_ib_invalidate_range()
286 /* Count page invalidations */ in mlx5_ib_invalidate_range()
287 invalidations += idx - blk_start_idx + 1; in mlx5_ib_invalidate_range()
296 /* Count page invalidations */ in mlx5_ib_invalidate_range()
297 invalidations += idx - blk_start_idx + 1; in mlx5_ib_invalidate_range()
300 mlx5_update_odp_stats(mr, invalidations, invalidations); in mlx5_ib_invalidate_range()
100 atomic64_read(&mr->odp_stats.invalidations))) in fill_stat_mr_entry()
12 …h return data even if the snoops cause an invalidation. L2 cache line invalidations which do not w…
12 …nce operations. The following cache operations are not counted:\n\n1. Invalidations which do not r…
94 * flushed/invalidated. As we always have to emit invalidations in i915_gem_clflush_object()
80 /* read TID cache invalidations */
100 * flushed/invalidated. As we always have to emit invalidations in i915_gem_clflush_object()
162 * of buffer invalidations to 2048.
389 * buffer invalidations, so we need to return early so that we can in xreap_agextent_iter()