
Lines Matching full:reclaim

75 	/* How many pages shrink_list() should reclaim */
86 * primary target of this reclaim invocation.
96 /* Can active folios be deactivated as part of reclaim? */
109 /* Can folios be swapped as part of reclaim? */
112 /* Proactive reclaim invoked by userspace through memory.reclaim */
146 /* The highest zone to isolate folios for reclaim from */
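
Note: the matches at lines 75-146 are comments on fields of struct scan_control, the per-invocation state threaded through all of the reclaim paths. A rough sketch of the fields those comments annotate, assuming a recent mainline kernel (exact layout and the full field list vary by version):

    struct scan_control {
            /* How many pages shrink_list() should reclaim */
            unsigned long nr_to_reclaim;

            /*
             * The memory cgroup that hit its limit and is the primary
             * target of this reclaim invocation.
             */
            struct mem_cgroup *target_mem_cgroup;

            /* Can active folios be deactivated as part of reclaim? */
            unsigned int may_deactivate:2;

            /* Can folios be swapped as part of reclaim? */
            unsigned int may_swap:1;

            /* Proactive reclaim invoked by userspace through memory.reclaim */
            unsigned int proactive:1;

            /* The highest zone to isolate folios for reclaim from */
            s8 reclaim_idx;

            /* ... remaining fields elided ... */
    };
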
432 /* Returns true for reclaim through cgroup limits or cgroup interfaces. */
439 * Returns true for reclaim on the root cgroup. This is true for direct
440 * allocator reclaim and reclaim through cgroup interfaces on the root cgroup.
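
Note: lines 432-440 annotate two small predicates on struct scan_control that the rest of the file uses to tell cgroup-driven reclaim apart from global reclaim. A sketch, assuming a recent (~v6.6+) tree:

    /* Returns true for reclaim through cgroup limits or cgroup interfaces. */
    static bool cgroup_reclaim(struct scan_control *sc)
    {
            return sc->target_mem_cgroup;
    }

    /*
     * Returns true for reclaim on the root cgroup. This is true for direct
     * allocator reclaim and reclaim through cgroup interfaces on the root cgroup.
     */
    static bool root_reclaim(struct scan_control *sc)
    {
            return !sc->target_mem_cgroup ||
                   mem_cgroup_is_root(sc->target_mem_cgroup);
    }
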
521 * flush_reclaim_state(): add pages reclaimed outside of LRU-based reclaim to
539 * Only count such pages for global reclaim to prevent under-reclaiming in flush_reclaim_state()
541 * charging and false positives from proactive reclaim. in flush_reclaim_state()
551 * memcg reclaim, to make reporting more accurate and reduce in flush_reclaim_state()
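
Note: lines 521-551 come from flush_reclaim_state(), which folds pages freed outside of LRU-based reclaim (slab, lazy-freed pages, etc.) into the caller's reclaimed count, but only for reclaim rooted at the root cgroup. Condensed sketch from a recent kernel:

    static void flush_reclaim_state(struct scan_control *sc)
    {
            /*
             * Only count such pages for global reclaim, to prevent
             * under-reclaiming during memcg charging and false positives
             * from proactive reclaim (see the full comment in the source).
             */
            if (current->reclaim_state && root_reclaim(sc)) {
                    sc->nr_reclaimed += current->reclaim_state->reclaimed;
                    current->reclaim_state->reclaimed = 0;
            }
    }
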
611 * For non-memcg reclaim, is there in can_reclaim_anon_pages()
632 * As the data only determines if reclaim or compaction continues, it is
882 * we will try to reclaim all available objects, otherwise we can end in do_shrink_slab()
890 * scanning at high prio and therefore should try to reclaim as much as in do_shrink_slab()
1018 * @priority: the reclaim priority
1202 * workqueues. They may be required for reclaim to make in reclaim_throttle()
1444 * inode reclaim needs to empty out the radix tree or in __remove_mapping()
1574 /* Reclaim if clean, defer dirty folios to writeback */ in folio_check_references()
1589 * from reclaim context. Do not stall reclaim based on them. in folio_check_dirty_writeback()
1626 * demote or reclaim pages from the target node via kswapd if we are in alloc_demote_folio()
1770 * for immediate reclaim are making it to the end of in shrink_folio_list()
1780 * 1) If reclaim is encountering an excessive number in shrink_folio_list()
1782 * the writeback and reclaim flags set, then it in shrink_folio_list()
1792 * 2) Global or new memcg reclaim encounters a folio that is in shrink_folio_list()
1793 * not marked for immediate reclaim, or the caller does not in shrink_folio_list()
1796 * reclaim and continue scanning. in shrink_folio_list()
1800 * enter reclaim, and deadlock if it waits on a folio for in shrink_folio_list()
1806 * reclaim flag set. memcg does not have any dirty folio in shrink_folio_list()
1809 * reclaim. Wait for the writeback to complete. in shrink_folio_list()
1816 * Since they're marked for immediate reclaim, they won't put in shrink_folio_list()
1835 * just cleared the reclaim flag, then in shrink_folio_list()
1836 * setting the reclaim flag here ends up in shrink_folio_list()
1840 * have the reclaim flag set next time in shrink_folio_list()
1841 * memcg reclaim reaches the tests above, in shrink_folio_list()
1844 * in global reclaim. in shrink_folio_list()
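
Note: lines 1770-1844 are the long comment in shrink_folio_list() distinguishing three cases for a folio found under writeback. The code that follows that comment looks approximately like this in recent kernels (condensed sketch; nr_pages is folio_nr_pages(folio), and helper names vary slightly by version):

    if (folio_test_writeback(folio)) {
            /* Case 1: kswapd sees excessive folios under writeback and this
             * folio has both the writeback and reclaim flags set: stall. */
            if (current_is_kswapd() &&
                folio_test_reclaim(folio) &&
                test_bit(PGDAT_WRITEBACK, &pgdat->flags)) {
                    stat->nr_immediate += nr_pages;
                    goto activate_locked;

            /* Case 2: global or new memcg reclaim, or the folio is not yet
             * marked for immediate reclaim, or the caller cannot enter the
             * FS: mark for immediate reclaim and continue scanning. */
            } else if (writeback_throttling_sane(sc) ||
                       !folio_test_reclaim(folio) ||
                       !may_enter_fs(folio, sc->gfp_mask)) {
                    folio_set_reclaim(folio);
                    stat->nr_writeback += nr_pages;
                    goto activate_locked;

            /* Case 3: legacy memcg reclaim on a folio already marked for
             * immediate reclaim: wait for the writeback to complete. */
            } else {
                    folio_unlock(folio);
                    folio_wait_writeback(folio);
                    /* then go back and try the same folio again */
                    list_add_tail(&folio->lru, folio_list);
                    continue;
            }
    }
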
1871 ; /* try to reclaim the folio below */ in shrink_folio_list()
1964 * No point in trying to reclaim folio if it is pinned. in shrink_folio_list()
1965 * Furthermore we don't want to reclaim underlying fs metadata in shrink_folio_list()
1982 * the same dirty folios again (with the reclaim in shrink_folio_list()
1990 * Immediately reclaim when written back. in shrink_folio_list()
2030 * ahead and try to reclaim the folio. in shrink_folio_list()
2134 /* Not a candidate for swapping, so reclaim swap space. */ in shrink_folio_list()
2162 * goto retry to reclaim the undemoted folios in folio_list if in shrink_folio_list()
2170 * However, disabling reclaim from top tier nodes entirely in shrink_folio_list()
2173 * mlocked or too hot to reclaim. We can disable reclaim in shrink_folio_list()
2174 * from top tier nodes in proactive reclaim though as that is in shrink_folio_list()
2266 * It is waste of effort to scan and reclaim CMA pages if it is not available
2299 * @sc: The scan_control struct for this reclaim session
2339 * ineligible folios. This causes the VM to not reclaim any in isolate_lru_folios()
2968 * thrashing, try to reclaim those first before touching in prepare_scan_count()
3006 * runaway file reclaim problem, but rather just in prepare_scan_count()
3007 * extreme pressure. Reclaim as per usual then. in prepare_scan_count()
3045 * Global reclaim will swap to prevent OOM even with no in get_scan_count()
3075 * If there is enough inactive page cache, we do not reclaim in get_scan_count()
3126 * Scale a cgroup's reclaim pressure by proportioning in get_scan_count()
3142 * There is one special case: in the first reclaim pass, in get_scan_count()
3144 * protection. If that fails to reclaim enough pages to in get_scan_count()
3145 * satisfy the reclaim goal, we come back and override in get_scan_count()
3149 * equally. As such, we reclaim them based on how much in get_scan_count()
3173 * reclaim moving forwards, avoiding decrementing in get_scan_count()
3197 * their relative recent reclaim efficiency. in get_scan_count()
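
Note: the get_scan_count() matches around lines 3126-3197 concern how memcg protection (memory.min/memory.low) scales the number of pages to scan. The core of that scaling, sketched from a recent kernel (min and low come from mem_cgroup_protection(), lruvec_size is the size of the LRU being scanned):

    unsigned long cgroup_size = mem_cgroup_size(memcg);
    unsigned long protection;

    /* The first pass honours memory.low; the retry pass overrides it. */
    if (!sc->memcg_low_reclaim && low > min) {
            protection = low;
            sc->memcg_low_skipped = 1;
    } else {
            protection = min;
    }

    /* Avoid a TOCTOU race with the earlier protection check. */
    cgroup_size = max(cgroup_size, protection);

    /* Scale reclaim pressure by usage relative to the protected amount. */
    scan = lruvec_size - lruvec_size * protection / (cgroup_size + 1);
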
3224 * ultimately no way to reclaim the memory.
4607 /* check the order to exclude compaction-induced reclaim */ in lru_gen_age_node()
5332 * reclaim.
5359 /* don't abort memcg reclaim to ensure fairness */ in should_abort_scan()
5366 /* check the order to exclude compaction-induced reclaim */ in should_abort_scan()
5601 * them is likely futile and can cause high reclaim latency when there in lru_gen_shrink_node()
6330 * Global reclaiming within direct reclaim at DEF_PRIORITY is a normal in shrink_lruvec()
6336 * do a batch of work at once. For memcg reclaim one check is made to in shrink_lruvec()
6337 * abort proportional reclaim if either the file or anon lru has already in shrink_lruvec()
6365 * For kswapd and memcg, reclaim at least the number of pages in shrink_lruvec()
6426 /* Use reclaim/compaction for costly allocs or under memory pressure */
6438 * Reclaim/compaction is used for high-order allocation requests. It reclaims
6452 /* If not in reclaim/compaction mode, stop */ in should_continue_reclaim()
6457 * Stop if we failed to reclaim any pages from the last SWAP_CLUSTER_MAX in should_continue_reclaim()
6459 * with the risk reclaim/compaction and the resulting allocation attempt in should_continue_reclaim()
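
Note: lines 6426-6459 belong to the reclaim/compaction coupling. in_reclaim_compaction() decides whether this reclaim pass is running on behalf of a costly high-order allocation, and should_continue_reclaim() then decides whether reclaim should keep going. A sketch of the former in its long-standing form (newer trees gate on gfp_compaction_allowed() instead of the CONFIG check):

    /* Use reclaim/compaction for costly allocs or under memory pressure */
    static bool in_reclaim_compaction(struct scan_control *sc)
    {
            if (IS_ENABLED(CONFIG_COMPACTION) && sc->order &&
                (sc->order > PAGE_ALLOC_COSTLY_ORDER ||
                 sc->priority < DEF_PRIORITY - 2))
                    return true;

            return false;
    }
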
6509 * aren't eligible for reclaim - either because they in shrink_node_memcgs()
6545 /* Record the group's reclaim efficiency */ in shrink_node_memcgs()
6581 /* Record the subtree's reclaim efficiency */ in shrink_node()
6591 * If reclaim is isolating dirty pages under writeback, in shrink_node()
6597 * device. The only option is to throttle from reclaim in shrink_node()
6604 * immediate reclaim and stall if any are encountered in shrink_node()
6610 /* Allow kswapd to start writing pages during reclaim.*/ in shrink_node()
6616 * reclaim and under writeback (nr_immediate), it in shrink_node()
6627 * for writeback and immediate reclaim (counted in nr.congested). in shrink_node()
6641 * Stall direct reclaim for IO completions if the lruvec is in shrink_node()
6657 * many failures to reclaim anything from them and goes to in shrink_node()
6658 * sleep. On reclaim progress, reset the failure counter. A in shrink_node()
6659 * successful direct reclaim run will revive a dormant kswapd. in shrink_node()
6668 * should reclaim first.
6679 /* Compaction cannot yet proceed. Do reclaim. */ in compaction_ready()
6686 * with reclaim to make a buffer of free pages available to give in compaction_ready()
6688 * Note that we won't actually reclaim the whole buffer in one attempt in compaction_ready()
6690 * we are already above the high+gap watermark, don't reclaim at all. in compaction_ready()
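
Note: the compaction_ready() matches at lines 6679-6690 describe reclaiming a buffer of free pages above the high watermark so compaction has room to work. The decisive check is roughly the following (sketch; compact_gap(order) is 2 << order):

    /* Compaction needs roughly twice the request size as migration targets. */
    watermark = high_wmark_pages(zone) + compact_gap(sc->order);

    /* Already above high + gap: skip reclaim for this zone entirely. */
    return zone_watermark_ok_safe(zone, 0, watermark, sc->reclaim_idx);
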
6700 * If reclaim is making progress greater than 12% efficiency then in consider_reclaim_throttle()
6714 * Do not throttle kswapd or cgroup reclaim on NOPROGRESS as it will in consider_reclaim_throttle()
6716 * under writeback and marked for immediate reclaim at the tail of the in consider_reclaim_throttle()
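
Note: the consider_reclaim_throttle() fragment at line 6700 quantifies "12% efficiency" as reclaimed pages exceeding one eighth (12.5%) of pages scanned; in that case tasks throttled on VMSCAN_THROTTLE_NOPROGRESS are woken instead of throttling further. Sketch from a recent kernel:

    /* More than 1/8 of scanned pages were reclaimed: progress is healthy. */
    if (sc->nr_reclaimed > (sc->nr_scanned >> 3)) {
            wait_queue_head_t *wqh;

            wqh = &pgdat->reclaim_wait[VMSCAN_THROTTLE_NOPROGRESS];
            if (waitqueue_active(wqh))
                    wake_up(wqh);

            return;
    }
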
6728 * This is the direct reclaim path, for page-allocating processes. We only
6729 * try to reclaim pages from zones which will satisfy the caller's allocation
6747 * allowed level, force direct reclaim to scan the highmem zone as in shrink_zones()
6810 /* See comment about same check for global reclaim above */ in shrink_zones()
6843 * This is the main entry point to direct page reclaim.
6915 /* Aborted reclaim to try compaction? don't OOM, then */ in do_try_to_free_pages()
6923 * memory from reclaim. Neither of which are very common, so in do_try_to_free_pages()
7005 * responsible for cleaning pages necessary for reclaim to make forward in throttle_direct_reclaim()
7006 * progress. kjournald for example may enter direct reclaim while in throttle_direct_reclaim()
7028 * is an affinity then between processes waking up and where reclaim in throttle_direct_reclaim()
7101 * Do not enter reclaim if fatal signal was delivered while throttled. in try_to_free_pages()
7121 /* Only used by soft limit reclaim. Do not reuse for anything else. */
7148 * if we don't reclaim here, the shrink_node from balance_pgdat in mem_cgroup_shrink_node()
7183 * the reclaim does not bail out early. in try_to_free_mem_cgroup_pages()
7235 * should not be checked at the same time as reclaim would in pgdat_watermark_boosted()
7326 /* Hopeless node, leave it to direct reclaim */ in prepare_kswapd_sleep()
7343 * reclaim or if the lack of progress was due to pages under writeback.
7352 /* Reclaim a number of pages proportional to the number of zones */ in kswapd_shrink_node()
7372 * excessive reclaim. Assume that a process requested a high-order in kswapd_shrink_node()
7373 * can direct reclaim/compact. in kswapd_shrink_node()
7381 /* Page allocator PCP high watermark is lowered if reclaim is active. */
7414 * For kswapd, balance_pgdat() will reclaim pages across a node from zones
7423 * or lower is eligible for reclaim until at least one usable zone is
7449 * Account for the reclaim boost. Note that the zone boost is left in in balance_pgdat()
7451 * stall or direct reclaim until kswapd is finished. in balance_pgdat()
7481 * buffers can relieve lowmem pressure. Reclaim may still not in balance_pgdat()
7483 * request are balanced to avoid excessive reclaim from kswapd. in balance_pgdat()
7500 * on the grounds that the normal reclaim should be enough to in balance_pgdat()
7510 * If boosting is not active then only reclaim if there are no in balance_pgdat()
7517 /* Limit the priority of boosting to avoid reclaim writeback */ in balance_pgdat()
7522 * Do not writeback or swap pages for boosted reclaim. The in balance_pgdat()
7524 * from reclaim context. If no pages are reclaimed, the in balance_pgdat()
7525 * reclaim will be aborted. in balance_pgdat()
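
Note: lines 7510-7525 describe the watermark-boost path in balance_pgdat(): reclaim done only to satisfy a boost must not write back or swap, and its priority is capped. That intent maps to roughly these assignments in the balancing loop (sketch; field names as of recent kernels):

    /* Limit the priority of boosting to avoid reclaim writeback */
    if (nr_boost_reclaim && sc.priority == DEF_PRIORITY - 2)
            raise_priority = false;

    /*
     * Do not writeback or swap pages for boosted reclaim. The intention
     * is to relieve pressure, not to issue sub-optimal IO from reclaim
     * context. If no pages are reclaimed, the reclaim will be aborted.
     */
    sc.may_writepage = !laptop_mode && !nr_boost_reclaim;
    sc.may_swap = !nr_boost_reclaim;
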
7544 /* Call soft limit reclaim before calling shrink_node. */ in balance_pgdat()
7583 * If reclaim made no progress for a boost, stop reclaim as in balance_pgdat()
7600 /* If reclaim was boosted, account for the reclaim done in this pass */ in balance_pgdat()
7640 * sleep after previous reclaim attempt (node is still unbalanced). In that
7641 * case return the zone index of the previous kswapd reclaim cycle.
7665 * deliberate on the assumption that if reclaim cannot keep an in kswapd_try_to_sleep()
7807 * Reclaim begins at the requested order but if a high-order in kswapd()
7808 * reclaim fails then kswapd falls back to reclaiming for in kswapd()
7829 * kswapd should reclaim (direct reclaim is deferred), wake it up for the zone's
7830 * pgdat. It will wake up kcompactd after reclaiming memory. If kswapd reclaim
7858 /* Hopeless node, leave it to direct reclaim if possible */ in wakeup_kswapd()
7970 * Node reclaim mode
7992 * slab reclaim needs to occur.
8010 /* Work out how many page cache pages we can reclaim in this reclaim_mode */
8039 * Try to free up some pages from this node through reclaim.
8097 * Node reclaim reclaims unmapped file backed pages and in node_reclaim()
8102 * thrown out if the node is overallocated. So we do not reclaim in node_reclaim()
8118 * Only run node reclaim on the local node or on nodes that do not in node_reclaim()
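
Note: the matches from line 7970 onward are the node reclaim path (node_reclaim() and its helpers), controlled by the vm.zone_reclaim_mode sysctl. The mode is a bitmask; the bit values below are ABI and therefore stable, although the header they are defined in has moved between versions (a sketch of the definitions, not their current location):

    #define RECLAIM_ZONE    (1 << 0)   /* Run shrink_inactive_list on the zone */
    #define RECLAIM_WRITE   (1 << 1)   /* Writeout pages during reclaim */
    #define RECLAIM_UNMAP   (1 << 2)   /* Unmap pages during reclaim */
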