
Searched full:reclaim (Results 1 – 25 of 865) sorted by relevance


/kernel/linux/linux-6.6/tools/testing/selftests/cgroup/
memcg_protection.m
5 % This script simulates reclaim protection behavior on a single level of memcg
10 % reclaim) and then the reclaim starts, all memory is reclaimable, i.e. treated
11 % same. It simulates only non-low reclaim and assumes all memory.min = 0.
24 % Reclaim parameters
27 % Minimal reclaim amount (GB)
30 % Reclaim coefficient (think as 0.5^sc->priority)
72 % nothing to reclaim, reached equilibrium
79 % XXX here I do parallel reclaim of all siblings
80 % in reality reclaim is serialized and each sibling recalculates own residual
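The script above models how a parent's reclaim protection is split among sibling memcgs. As a companion, here is a minimal C sketch of the proportional-distribution idea being simulated: when siblings together claim more memory.low than the parent's effective protection, each child's share is scaled down proportionally. The function name and the simplifications (no memory.min, no usage clamping) are assumptions for illustration, not the kernel's exact logic in mm/memcontrol.c.

/*
 * Sketch: proportional distribution of memory.low protection among
 * memcg siblings (what the MATLAB script above simulates). Simplified;
 * the kernel's real calculation lives in mm/memcontrol.c.
 */
#include <stdio.h>

static double effective_low(double child_low, double siblings_low_sum,
                            double parent_eff)
{
        if (siblings_low_sum <= 0.0)
                return 0.0;
        /* Siblings may overcommit the parent's protection; scale down. */
        if (siblings_low_sum > parent_eff)
                return parent_eff * (child_low / siblings_low_sum);
        return child_low;
}

int main(void)
{
        /* Parent has 4 GB effective protection; children claim 3 GB each. */
        double lows[] = { 3.0, 3.0 };
        double sum = lows[0] + lows[1];

        for (int i = 0; i < 2; i++)
                printf("child %d effective low: %.2f GB\n",
                       i, effective_low(lows[i], sum, 4.0));
        return 0;
}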
/kernel/linux/linux-5.10/drivers/md/
dm-zoned-reclaim.c
12 #define DM_MSG_PREFIX "zoned reclaim"
33 * Reclaim state flags.
45 * Percentage of unmapped (free) random zones below which reclaim starts
51 * Percentage of unmapped (free) random zones above which reclaim will
338 * Reclaim an empty zone.
362 * Find a candidate zone for reclaim and process it.
376 DMDEBUG("(%s/%u): No zone found to reclaim", in dmz_do_reclaim()
390 * Reclaim the random data zone by moving its in dmz_do_reclaim()
412 * Reclaim the data zone by merging it into the in dmz_do_reclaim()
422 DMDEBUG("(%s/%u): reclaim zone %u interrupted", in dmz_do_reclaim()
[all …]
/kernel/linux/linux-6.6/drivers/md/
dm-zoned-reclaim.c
12 #define DM_MSG_PREFIX "zoned reclaim"
33 * Reclaim state flags.
45 * Percentage of unmapped (free) random zones below which reclaim starts
51 * Percentage of unmapped (free) random zones above which reclaim will
338 * Reclaim an empty zone.
362 * Find a candidate zone for reclaim and process it.
376 DMDEBUG("(%s/%u): No zone found to reclaim", in dmz_do_reclaim()
390 * Reclaim the random data zone by moving its in dmz_do_reclaim()
412 * Reclaim the data zone by merging it into the in dmz_do_reclaim()
422 DMDEBUG("(%s/%u): reclaim zone %u interrupted", in dmz_do_reclaim()
[all …]
/kernel/linux/linux-6.6/Documentation/core-api/
memory-allocation.rst
43 direct reclaim may be triggered under memory pressure; the calling
46 handler, use ``GFP_NOWAIT``. This flag prevents direct reclaim and
74 prevent recursion deadlocks caused by direct memory reclaim calling
87 GFP flags and reclaim behavior
89 Memory allocations may trigger direct or background reclaim and it is
95 doesn't kick the background reclaim. Should be used carefully because it
97 reclaim.
101 context but can wake kswapd to reclaim memory if the zone is below
111 * ``GFP_KERNEL`` - both background and direct reclaim are allowed and the
119 reclaim (one round of reclaim in this implementation). The OOM killer
[all …]
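The memory-allocation.rst lines above contrast flags that may enter direct reclaim with flags that must not sleep. A short kernel-style sketch of that choice follows; the helper and its caller context are hypothetical, only the flag semantics come from the document.

/*
 * Illustrative sketch of the GFP choice described above: GFP_KERNEL
 * may sleep and enter direct reclaim, GFP_NOWAIT must not (it can
 * still wake kswapd). The helper is hypothetical.
 */
#include <linux/types.h>
#include <linux/slab.h>
#include <linux/gfp.h>

void *alloc_buf(size_t len, bool in_atomic_ctx)
{
        if (in_atomic_ctx)
                /* No direct reclaim: the allocation can fail and the
                 * caller must handle NULL. */
                return kmalloc(len, GFP_NOWAIT);

        /* Process context: background and direct reclaim both allowed. */
        return kmalloc(len, GFP_KERNEL);
}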
gfp_mask-from-fs-io.rst
15 memory reclaim calling back into the FS or IO paths and blocking on
25 of GFP_NOFS/GFP_NOIO can lead to memory over-reclaim or other memory
26 reclaim issues.
44 any critical section with respect to the reclaim is started - e.g.
45 lock shared with the reclaim context or when a transaction context
46 nesting would be possible via reclaim. The restore function should be
48 explanation of what the reclaim context is, for easier maintenance.
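The gfp_mask-from-fs-io.rst fragments above refer to the scoped API that marks a whole critical section as GFP_NOFS instead of passing the flag at each allocation site. Below is a hedged sketch of that pattern using the real memalloc_nofs_save()/memalloc_nofs_restore() helpers; the lock and the section body are placeholders.

/*
 * Scoped GFP_NOFS section, as the document recommends: every
 * allocation between save and restore implicitly loses __GFP_FS, so
 * reclaim cannot recurse into the filesystem while fs_lock is held.
 * The lock and the work inside the section are placeholders.
 */
#include <linux/sched/mm.h>
#include <linux/mutex.h>

static void fs_transaction_sketch(struct mutex *fs_lock)
{
        unsigned int nofs_flags;

        mutex_lock(fs_lock);            /* also taken from the reclaim path */
        nofs_flags = memalloc_nofs_save();

        /* ... allocations here behave as if GFP_NOFS were passed ... */

        memalloc_nofs_restore(nofs_flags);
        mutex_unlock(fs_lock);
}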
/kernel/linux/linux-5.10/Documentation/core-api/
memory-allocation.rst
43 direct reclaim may be triggered under memory pressure; the calling
46 handler, use ``GFP_NOWAIT``. This flag prevents direct reclaim and
74 prevent recursion deadlocks caused by direct memory reclaim calling
87 GFP flags and reclaim behavior
89 Memory allocations may trigger direct or background reclaim and it is
95 doesn't kick the background reclaim. Should be used carefully because it
97 reclaim.
101 context but can wake kswapd to reclaim memory if the zone is below
111 * ``GFP_KERNEL`` - both background and direct reclaim are allowed and the
119 reclaim (one round of reclaim in this implementation). The OOM killer
[all …]
gfp_mask-from-fs-io.rst
15 memory reclaim calling back into the FS or IO paths and blocking on
25 of GFP_NOFS/GFP_NOIO can lead to memory over-reclaim or other memory
26 reclaim issues.
44 any critical section with respect to the reclaim is started - e.g.
45 lock shared with the reclaim context or when a transaction context
46 nesting would be possible via reclaim. The restore function should be
48 explanation of what the reclaim context is, for easier maintenance.
/kernel/linux/linux-5.10/mm/
vmscan.c
204 * As the data only determines if reclaim or compaction continues, it is
219 * This prevents zones like DMA32 from being ignored in reclaim in zone_reclaimable_pages()
430 * we will try to reclaim all available objects, otherwise we can end in do_shrink_slab()
438 * scanning at high prio and therefore should try to reclaim as much as in do_shrink_slab()
567 * @priority: the reclaim priority
859 * inode reclaim needs to empty out the radix tree or in __remove_mapping()
979 /* Reclaim if clean, defer dirty pages to writeback */ in page_check_references()
994 * from reclaim context. Do not stall reclaim based on them in page_check_dirty_writeback()
1080 * pages marked for immediate reclaim are making it to the in shrink_page_list()
1093 * 1) If reclaim is encountering an excessive number of pages in shrink_page_list()
[all …]
memcg_reclaim.c
98 * do not reclaim anything from the anonymous working set right now. in get_scan_count_hyperhold()
162 * their relative recent reclaim efficiency. in get_scan_count_hyperhold()
268 * aren't eligible for reclaim - either because they in shrink_anon()
405 * thrashing, try to reclaim those first before touching in shrink_node_hyperhold()
447 * runaway file reclaim problem, but rather just in shrink_node_hyperhold()
448 * extreme pressure. Reclaim as per usual then. in shrink_node_hyperhold()
473 * If reclaim is isolating dirty pages under writeback, in shrink_node_hyperhold()
479 * device. The only option is to throttle from reclaim in shrink_node_hyperhold()
486 * immediate reclaim and stall if any are encountered in shrink_node_hyperhold()
492 /* Allow kswapd to start writing pages during reclaim. */ in shrink_node_hyperhold()
[all …]
/kernel/linux/linux-6.6/include/linux/
gfp_types.h
140 * DOC: Reclaim modifiers
142 * Reclaim modifiers
153 * %__GFP_DIRECT_RECLAIM indicates that the caller may enter direct reclaim.
158 * the low watermark is reached and have it reclaim pages until the high
160 * options are available and the reclaim is likely to disrupt the system. The
162 * reclaim/compaction may cause indirect stalls.
164 * %__GFP_RECLAIM is shorthand to allow/forbid both direct and kswapd reclaim.
176 * memory direct reclaim to get some memory under memory pressure (thus
182 * %__GFP_RETRY_MAYFAIL: The VM implementation will retry memory reclaim
212 #define __GFP_DIRECT_RECLAIM ((__force gfp_t)___GFP_DIRECT_RECLAIM) /* Caller can reclaim */
[all …]
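The gfp_types.h excerpt names the two reclaim modifier bits and their shorthand. The composition shown in the comment below matches the header; the small masking helper is illustrative, not from the kernel.

/*
 * Reclaim modifiers from gfp_types.h:
 *   __GFP_RECLAIM == __GFP_DIRECT_RECLAIM | __GFP_KSWAPD_RECLAIM
 * The helper below is illustrative, not from the header.
 */
#include <linux/types.h>
#include <linux/gfp_types.h>

/* Strip the sleeping half of reclaim: the result may still wake
 * kswapd but never enters direct reclaim itself. */
static inline gfp_t gfp_no_direct_reclaim(gfp_t flags)
{
        return flags & ~__GFP_DIRECT_RECLAIM;
}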
/kernel/liteos_a/kernel/base/vm/
oom.c
80 * TryShrinkMemory may reclaim 0 pages the first time, moving pages from the active list in OomForceShrinkMemory()
81 * to the inactive list, and then reclaim memory from the inactive list the second time. in OomForceShrinkMemory()
103 * we do force memory reclaim from page cache here. in OomReclaimPageCache()
104 * if we get memory, we will reclaim pagecache memory again. in OomReclaimPageCache()
105 * if there is no memory to reclaim, we will return. in OomReclaimPageCache()
134 /* first we will check if we need to reclaim pagecache memory */ in OomCheckProcess()
169 " oom reclaim memory threshold: %#x(byte)\n" in OomInfodump()
198 PRINTK("[oom] reclaim memory threshold %#x(byte) invalid," in OomSetReclaimMemThreashold()
203 PRINTK("[oom] set oom reclaim memory threshold %#x(byte) successful\n", in OomSetReclaimMemThreashold()
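The oom.c comments describe a two-pass shrink: the first pass may only age pages from the active list to the inactive list and reclaim nothing, so the code tries again. A hedged sketch of that retry loop follows; TryShrinkMemory() is named in the excerpt, but its prototype and the loop around it are assumptions.

/*
 * Sketch of the two-pass shrink described above. Pass one may only
 * move pages from the active to the inactive list (freeing 0 pages);
 * pass two then reclaims from the inactive list. TryShrinkMemory()
 * is named in the excerpt; its prototype here is an assumption.
 */
#define OOM_SHRINK_PASSES 2

extern unsigned int TryShrinkMemory(unsigned int nPages);

static unsigned int OomForceShrinkSketch(unsigned int wantPages)
{
        unsigned int freed = 0;

        for (int pass = 0; pass < OOM_SHRINK_PASSES; pass++) {
                freed += TryShrinkMemory(wantPages - freed);
                if (freed >= wantPages)
                        break;
        }
        return freed;
}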
/kernel/linux/linux-6.6/Documentation/mm/
multigen_lru.rst
7 page reclaim and improves performance under memory pressure. Page
8 reclaim decides the kernel's caching policy and ability to overcommit
110 eviction. They form a closed-loop system, i.e., the page reclaim.
174 ignored when the current memcg is under reclaim. Similarly, page table
175 walkers will ignore pages from nodes other than the one under reclaim.
187 can incur the highest CPU cost in the reclaim path.
228 global reclaim, which is critical to system-wide memory overcommit in
229 data centers. Note that memcg LRU only applies to global reclaim.
241 In terms of global reclaim, it has two distinct features:
245 2. Eventual fairness, which allows direct reclaim to bail out at will
[all …]
/kernel/linux/linux-6.6/Documentation/admin-guide/device-mapper/
dm-zoned.rst
27 internally for storing metadata and performing reclaim operations.
108 situation, a reclaim process regularly scans used conventional zones and
109 tries to reclaim the least recently used zones by copying the valid
128 (for both incoming BIO processing and reclaim process) and all dirty
184 Normally the reclaim process will be started once there are less than 50
185 percent free random zones. In order to start the reclaim process manually
191 dmsetup message /dev/dm-X 0 reclaim
193 will start the reclaim process and random zones will be moved to sequential
/kernel/linux/linux-5.10/Documentation/admin-guide/device-mapper/
dm-zoned.rst
27 internally for storing metadata and performing reclaim operations.
108 situation, a reclaim process regularly scans used conventional zones and
109 tries to reclaim the least recently used zones by copying the valid
128 (for both incoming BIO processing and reclaim process) and all dirty
184 Normally the reclaim process will be started once there are less than 50
185 percent free random zones. In order to start the reclaim process manually
191 dmsetup message /dev/dm-X 0 reclaim
193 will start the reclaim process and random zones will be moved to sequential
/kernel/linux/linux-5.10/include/linux/
gfp.h
107 * %__GFP_ATOMIC indicates that the caller cannot reclaim or sleep and is
130 * DOC: Reclaim modifiers
132 * Reclaim modifiers
143 * %__GFP_DIRECT_RECLAIM indicates that the caller may enter direct reclaim.
148 * the low watermark is reached and have it reclaim pages until the high
150 * options are available and the reclaim is likely to disrupt the system. The
152 * reclaim/compaction may cause indirect stalls.
154 * %__GFP_RECLAIM is shorthand to allow/forbid both direct and kswapd reclaim.
166 * memory direct reclaim to get some memory under memory pressure (thus
172 * %__GFP_RETRY_MAYFAIL: The VM implementation will retry memory reclaim
[all …]
compaction.h
7 * Lower value means higher priority, analogous to reclaim priority.
25 * compaction didn't start as it was not possible or direct reclaim
131 /* Compaction needs reclaim to be performed first, so it can continue. */
136 * so the regular reclaim has to try harder and reclaim something. in compaction_needs_reclaim()
156 * instead of entering direct reclaim. in compaction_withdrawn()
/kernel/linux/linux-6.6/mm/
vmscan.c
75 /* How many pages shrink_list() should reclaim */
86 * primary target of this reclaim invocation.
96 /* Can active folios be deactivated as part of reclaim? */
109 /* Can folios be swapped as part of reclaim? */
112 /* Proactive reclaim invoked by userspace through memory.reclaim */
146 /* The highest zone to isolate folios for reclaim from */
432 /* Returns true for reclaim through cgroup limits or cgroup interfaces. */
439 * Returns true for reclaim on the root cgroup. This is true for direct
440 * allocator reclaim and reclaim through cgroup interfaces on the root cgroup.
521 * flush_reclaim_state(): add pages reclaimed outside of LRU-based reclaim to
[all …]
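The vmscan.c comments above each describe a field of the per-invocation reclaim control structure. Below is a condensed sketch of just those fields, with names matching the excerpt's comments; the real struct scan_control in mm/vmscan.c has many more members.

/*
 * Condensed sketch of the reclaim parameters the comments above
 * describe; the real struct scan_control in mm/vmscan.c is larger.
 */
struct mem_cgroup;

struct scan_control_sketch {
        unsigned long nr_to_reclaim;    /* how many pages shrink_list() should reclaim */
        struct mem_cgroup *target_mem_cgroup; /* primary target of this reclaim invocation */
        unsigned int may_deactivate:2;  /* can active folios be deactivated? */
        unsigned int may_swap:1;        /* can folios be swapped as part of reclaim? */
        unsigned int proactive:1;       /* invoked by userspace via memory.reclaim */
        signed char reclaim_idx;        /* highest zone to isolate folios from */
};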
/kernel/linux/linux-6.6/drivers/gpu/drm/amd/amdgpu/
amdgpu_mes.h
401 * A bit more detail about why to set no-FS reclaim with MES lock:
418 * notifiers can be called in reclaim-FS context. That's where the
420 * memory pressure. While we are running in reclaim-FS context, we must
421 * not trigger another memory reclaim operation because that would
422 * recursively reenter the reclaim code and cause a deadlock. The
428 * Thread A: takes and holds reservation lock | triggers reclaim-FS |
433 * triggering a reclaim-FS operation itself.
441 * As a result, make sure no reclaim-FS happens while holding this lock anywhere
442 * to prevent deadlocks when an MMU notifier runs in reclaim-FS context.
/kernel/linux/linux-5.10/fs/xfs/
xfs_icache.c
143 * Queue background inode reclaim work if there are reclaimable inodes and there
144 * isn't reclaim work already scheduled or in progress.
169 /* propagate the reclaim tag up into the perag radix tree */ in xfs_perag_set_reclaim_tag()
175 /* schedule periodic background inode reclaim */ in xfs_perag_set_reclaim_tag()
191 /* clear the reclaim tag from the perag radix tree */ in xfs_perag_clear_reclaim_tag()
412 * trouble. Try to re-add it to the reclaim list. in xfs_iget_cache_hit()
720 * lookup reduction and stack usage. This is in the reclaim path, so we can't
745 /* avoid new or reclaimable inodes. Leave for reclaim code to flush */ in xfs_inode_walk_ag_grab()
755 /* If we can't grab the inode, it must be on its way to reclaim. */ in xfs_inode_walk_ag_grab()
983 * Grab the inode for reclaim exclusively.
[all …]
/kernel/linux/linux-6.6/Documentation/ABI/testing/
sysfs-kernel-mm-numa
9 Description: Enable/disable demoting pages during reclaim
11 Page migration during reclaim is intended for systems
16 Allowing page migration during reclaim enables these
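The ABI entry above is a single sysfs boolean. A tiny userspace sketch toggling it follows; the full path /sys/kernel/mm/numa/demotion_enabled is taken from the ABI file's naming, and error handling is minimal.

/*
 * Toggle reclaim-time page demotion via the sysfs knob the ABI entry
 * above documents (/sys/kernel/mm/numa/demotion_enabled). Root only.
 */
#include <stdio.h>

int main(void)
{
        const char *path = "/sys/kernel/mm/numa/demotion_enabled";
        FILE *f = fopen(path, "w");

        if (!f) {
                perror(path);
                return 1;
        }
        fputs("1\n", f);        /* write "0" to disable demotion */
        fclose(f);
        return 0;
}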
/kernel/linux/linux-6.6/Documentation/trace/postprocess/
trace-vmscan-postprocess.pl
3 # page reclaim. It makes an attempt to extract some high-level information on
325 # Record how long direct reclaim took this time
482 printf("Reclaim latencies expressed as order-latency_in_ms\n") if !$opt_ignorepid;
638 print "Direct reclaim pages scanned: $total_direct_nr_scanned\n";
639 print "Direct reclaim file pages scanned: $total_direct_nr_file_scanned\n";
640 print "Direct reclaim anon pages scanned: $total_direct_nr_anon_scanned\n";
641 print "Direct reclaim pages reclaimed: $total_direct_nr_reclaimed\n";
642 print "Direct reclaim file pages reclaimed: $total_direct_nr_file_reclaimed\n";
643 print "Direct reclaim anon pages reclaimed: $total_direct_nr_anon_reclaimed\n";
644 print "Direct reclaim write file sync I/O: $total_direct_writepage_file_sync\n";
[all …]
/kernel/linux/linux-5.10/Documentation/trace/postprocess/
trace-vmscan-postprocess.pl
3 # page reclaim. It makes an attempt to extract some high-level information on
325 # Record how long direct reclaim took this time
482 printf("Reclaim latencies expressed as order-latency_in_ms\n") if !$opt_ignorepid;
638 print "Direct reclaim pages scanned: $total_direct_nr_scanned\n";
639 print "Direct reclaim file pages scanned: $total_direct_nr_file_scanned\n";
640 print "Direct reclaim anon pages scanned: $total_direct_nr_anon_scanned\n";
641 print "Direct reclaim pages reclaimed: $total_direct_nr_reclaimed\n";
642 print "Direct reclaim file pages reclaimed: $total_direct_nr_file_reclaimed\n";
643 print "Direct reclaim anon pages reclaimed: $total_direct_nr_anon_reclaimed\n";
644 print "Direct reclaim write file sync I/O: $total_direct_writepage_file_sync\n";
[all …]
/kernel/linux/linux-6.6/fs/xfs/
xfs_icache.c
185 * Queue background inode reclaim work if there are reclaimable inodes and there
186 * isn't reclaim work already scheduled or in progress.
273 * Reclaim can signal (with a null agino) that it cleared its own tag in xfs_perag_clear_inode_tag()
350 * the actual reclaim workers from stomping over us while we recycle in xfs_iget_recycle()
365 * trouble. Try to re-add it to the reclaim list. in xfs_iget_recycle()
806 * Grab the inode for reclaim exclusively.
813 * avoid inodes that are no longer reclaim candidates.
817 * ensured that we are able to reclaim this inode and the world can see that we
818 * are going to reclaim it.
832 /* not a reclaim candidate. */ in xfs_reclaim_igrab()
[all …]
/kernel/linux/linux-5.10/Documentation/admin-guide/sysctl/
vm.rst
272 reclaim to satisfy a high-order allocation. The extfrag/extfrag_index file in
485 A percentage of the total pages in each zone. On Zone reclaim
489 systems that rarely perform global reclaim.
493 Note that slab reclaim is triggered in a per zone / node fashion.
503 This is a percentage of the total pages in each zone. Zone reclaim will
897 This percentage value controls the tendency of the kernel to reclaim
901 reclaim dentries and inodes at a "fair" rate with respect to pagecache and
902 swapcache reclaim. Decreasing vfs_cache_pressure causes the kernel to prefer
904 never reclaim dentries and inodes due to memory pressure and this can easily
906 causes the kernel to prefer to reclaim dentries and inodes.
[all …]
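vm.rst describes vfs_cache_pressure as a percentage (default 100) balancing dentry and inode reclaim against pagecache and swapcache reclaim. A small userspace sketch that reads the current value and raises it; the chosen value 150 is only an example.

/*
 * Read and raise /proc/sys/vm/vfs_cache_pressure, the knob described
 * above: >100 prefers reclaiming dentries and inodes, <100 prefers
 * keeping them, 0 never reclaims them (OOM risk). Root to write.
 */
#include <stdio.h>

int main(void)
{
        const char *path = "/proc/sys/vm/vfs_cache_pressure";
        FILE *f = fopen(path, "r");
        int cur = 0;

        if (f && fscanf(f, "%d", &cur) == 1)
                printf("current vfs_cache_pressure: %d\n", cur);
        if (f)
                fclose(f);

        f = fopen(path, "w");
        if (!f) {
                perror(path);
                return 1;
        }
        fputs("150\n", f);      /* example value; default is 100 */
        fclose(f);
        return 0;
}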
/kernel/linux/linux-6.6/Documentation/admin-guide/mm/
multigen_lru.rst
7 page reclaim and improves performance under memory pressure. Page
8 reclaim decides the kernel's caching policy and ability to overcommit
138 Proactive reclaim
140 Proactive reclaim induces page reclaim when there is no memory
142 comes in, the job scheduler wants to proactively reclaim cold pages on
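The admin guide's proactive-reclaim passage maps onto the cgroup v2 memory.reclaim interface: a job scheduler can write a byte count to it to reclaim that much from an idle job's cgroup. A sketch follows; the cgroup path "/sys/fs/cgroup/batch" is a hypothetical example.

/*
 * Proactive reclaim via cgroup v2: writing a size to memory.reclaim
 * asks the kernel to reclaim that much from the cgroup's coldest
 * pages. The cgroup path here is a hypothetical example.
 */
#include <stdio.h>

int main(void)
{
        const char *path = "/sys/fs/cgroup/batch/memory.reclaim";
        FILE *f = fopen(path, "w");

        if (!f) {
                perror(path);
                return 1;
        }
        fputs("1G\n", f);       /* reclaim 1 GiB; suffixes like G are accepted */
        fclose(f);
        return 0;
}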
