Lines Matching full:reclaim
204 * As the data only determines if reclaim or compaction continues, it is
423 * we will try to reclaim all available objects, otherwise we can end in do_shrink_slab()
431 * scanning at high prio and therefore should try to reclaim as much as in do_shrink_slab()
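The do_shrink_slab() fragments above (source lines 423 and 431) describe how the
scan priority drives slab scanning: at the highest priority (0) every freeable
object is scanned, otherwise only a priority-sized slice. A minimal userspace
sketch of that scaling; scan_delta(), the seeks divisor, and the constants are
simplified for illustration and are not the kernel function itself:

    #include <stdio.h>

    static unsigned long scan_delta(unsigned long freeable, int priority,
                                    unsigned int seeks)
    {
        /* priority 0 means "scan everything": the caller is desperate */
        if (priority == 0)
            return freeable;

        /* otherwise scan a slice that grows as priority drops toward 0 */
        return (freeable >> priority) * 4 / seeks;
    }

    int main(void)
    {
        for (int prio = 12; prio >= 0; prio--)   /* 12 mirrors DEF_PRIORITY */
            printf("prio %2d -> scan %lu of 4096\n",
                   prio, scan_delta(4096, prio, 2));
        return 0;
    }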
560 * @priority: the reclaim priority
852 * inode reclaim needs to empty out the radix tree or in __remove_mapping()
972 /* Reclaim if clean, defer dirty pages to writeback */ in page_check_references()
987 * from reclaim context. Do not stall reclaim based on them in page_check_dirty_writeback()
1073 * pages marked for immediate reclaim are making it to the in shrink_page_list()
1086 * 1) If reclaim is encountering an excessive number of pages in shrink_page_list()
1096 * 2) Global or new memcg reclaim encounters a page that is in shrink_page_list()
1097 * not marked for immediate reclaim, or the caller does not in shrink_page_list()
1100 * reclaim and continue scanning. in shrink_page_list()
1104 * enter reclaim, and deadlock if it waits on a page for in shrink_page_list()
1113 * reclaim. Wait for the writeback to complete. in shrink_page_list()
1120 * Since they're marked for immediate reclaim, they won't put in shrink_page_list()
1142 * reclaim reaches the tests above, so it will in shrink_page_list()
1144 * and it's also appropriate in global reclaim. in shrink_page_list()
1172 ; /* try to reclaim the page below */ in shrink_page_list()
1271 * Immediately reclaim when written back. in shrink_page_list()
1310 * ahead and try to reclaim the page. in shrink_page_list()
1409 /* Not a candidate for swapping, so reclaim swap space. */ in shrink_page_list()
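The shrink_page_list() comment excerpted above (source lines 1086-1113) walks
through three cases for a page found under writeback. A condensed restatement
as a standalone decision function; the wb_ctx struct, the flag names, and the
enum are invented for this sketch and only model the shape of the logic:

    #include <stdbool.h>
    #include <stdio.h>

    enum wb_action { WB_STALL_KSWAPD, WB_MARK_AND_SKIP, WB_WAIT_FOR_WRITEBACK };

    struct wb_ctx {
        bool current_is_kswapd;
        bool nr_immediate_high;   /* many PG_reclaim pages still under writeback */
        bool cgroup_writeback;    /* global reclaim, or writeback is cgroup aware */
        bool page_marked_reclaim; /* PG_reclaim already set on this page */
        bool may_enter_fs;        /* caller allows fs activity (__GFP_FS) */
    };

    static enum wb_action writeback_action(const struct wb_ctx *c)
    {
        /* 1) kswapd keeps meeting pages flagged for immediate reclaim: stall */
        if (c->current_is_kswapd && c->page_marked_reclaim && c->nr_immediate_high)
            return WB_STALL_KSWAPD;

        /* 2) mark for immediate reclaim and continue scanning */
        if (c->cgroup_writeback && (!c->page_marked_reclaim || !c->may_enter_fs))
            return WB_MARK_AND_SKIP;

        /* 3) legacy memcg reclaim: waiting on the writeback is safe */
        return WB_WAIT_FOR_WRITEBACK;
    }

    int main(void)
    {
        struct wb_ctx c = { .current_is_kswapd = true,
                            .page_marked_reclaim = true,
                            .nr_immediate_high = true };
        printf("action = %d\n", writeback_action(&c)); /* 0: stall kswapd */
        return 0;
    }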
1588 * @sc: The scan_control struct for this reclaim session
1629 * ineligible pages. This causes the VM to not reclaim any in isolate_lru_pages()
2236 * Global reclaim will swap to prevent OOM even with no in get_scan_count()
2266 * If there is enough inactive page cache, we do not reclaim in get_scan_count()
2317 * Scale a cgroup's reclaim pressure by proportioning in get_scan_count()
2333 * There is one special case: in the first reclaim pass, in get_scan_count()
2335 * protection. If that fails to reclaim enough pages to in get_scan_count()
2336 * satisfy the reclaim goal, we come back and override in get_scan_count()
2340 * equally. As such, we reclaim them based on how much in get_scan_count()
2364 * reclaim moving forwards, avoiding decrementing in get_scan_count()
2388 * their relative recent reclaim efficiency. in get_scan_count()
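Source lines 2317-2340 above describe scaling a cgroup's reclaim pressure in
proportion to how far its usage exceeds its protection. A standalone model of
that proportional formula, assuming the scan = size - size * protection /
(usage + 1) shape used by recent get_scan_count() implementations; scale_scan()
is illustrative, not the kernel code:

    #include <stdio.h>

    static unsigned long scale_scan(unsigned long lruvec_size,
                                    unsigned long protection,
                                    unsigned long cgroup_size)
    {
        /* the closer usage sits to the protected amount, the less we scan */
        return lruvec_size - lruvec_size * protection / (cgroup_size + 1);
    }

    int main(void)
    {
        /* fully protected cgroup: almost nothing is scanned */
        printf("%lu\n", scale_scan(1000, 4096, 4096)); /* 1 */
        /* half protected: roughly half the LRU is scanned */
        printf("%lu\n", scale_scan(1000, 2048, 4096)); /* 501 */
        return 0;
    }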
2430 * Global reclaiming within direct reclaim at DEF_PRIORITY is a normal in shrink_lruvec()
2436 * do a batch of work at once. For memcg reclaim one check is made to in shrink_lruvec()
2437 * abort proportional reclaim if either the file or anon lru has already in shrink_lruvec()
2465 * For kswapd and memcg, reclaim at least the number of pages in shrink_lruvec()
2526 /* Use reclaim/compaction for costly allocs or under memory pressure */
2538 * Reclaim/compaction is used for high-order allocation requests. It reclaims
2552 /* If not in reclaim/compaction mode, stop */ in should_continue_reclaim()
2557 * Stop if we failed to reclaim any pages from the last SWAP_CLUSTER_MAX in should_continue_reclaim()
2559 * with the risk that reclaim/compaction and the resulting allocation attempt in should_continue_reclaim()
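should_continue_reclaim() (source lines 2552-2559 above) decides whether
reclaim should keep feeding compaction for a high-order request. A sketch of
that decision, assuming the usual kernel constants (DEF_PRIORITY 12,
PAGE_ALLOC_COSTLY_ORDER 3, a compaction gap of 2 << order); should_continue()
is a model, not the kernel function:

    #include <stdbool.h>
    #include <stdio.h>

    #define DEF_PRIORITY            12
    #define PAGE_ALLOC_COSTLY_ORDER 3

    static bool in_reclaim_compaction(int order, int priority)
    {
        return order > 0 &&
               (priority < DEF_PRIORITY - 2 ||
                order >= PAGE_ALLOC_COSTLY_ORDER);
    }

    static bool should_continue(int order, int priority,
                                unsigned long nr_reclaimed,
                                unsigned long inactive_lru_pages)
    {
        if (!in_reclaim_compaction(order, priority))
            return false;  /* not in reclaim/compaction mode: stop */

        if (nr_reclaimed == 0)
            return false;  /* no progress at all: let compaction try */

        /* continue while a compaction-sized buffer can still be built */
        return inactive_lru_pages > (2UL << order);
    }

    int main(void)
    {
        /* order-4 request, some progress, 100 inactive pages: 100 > 32 */
        printf("%d\n", should_continue(4, 9, 64, 100)); /* 1: keep going */
        return 0;
    }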
2611 * aren't eligible for reclaim - either because they in shrink_node_memcgs()
2647 /* Record the group's reclaim efficiency */ in shrink_node_memcgs()
2711 * thrashing, try to reclaim those first before touching in shrink_node()
2748 * runaway file reclaim problem, but rather just in shrink_node()
2749 * extreme pressure. Reclaim as per usual then. in shrink_node()
2766 /* Record the subtree's reclaim efficiency */ in shrink_node()
2776 * If reclaim is isolating dirty pages under writeback, in shrink_node()
2782 * device. The only option is to throttle from reclaim in shrink_node()
2789 * immediate reclaim and stall if any are encountered in shrink_node()
2795 /* Allow kswapd to start writing pages during reclaim. */ in shrink_node()
2801 * reclaim and under writeback (nr_immediate), it in shrink_node()
2823 * Stall direct reclaim for IO completions if underlying BDIs in shrink_node()
2839 * many failures to reclaim anything from them and goes to in shrink_node()
2840 * sleep. On reclaim progress, reset the failure counter. A in shrink_node()
2841 * successful direct reclaim run will revive a dormant kswapd. in shrink_node()
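Source lines 2839-2841 above describe kswapd giving up after repeated failed
runs and being revived by a successful direct reclaim. A toy model of that
failure accounting, assuming the mainline MAX_RECLAIM_RETRIES limit of 16;
pgdat_model and both helpers are invented names:

    #include <stdbool.h>
    #include <stdio.h>

    #define MAX_RECLAIM_RETRIES 16

    struct pgdat_model {
        int kswapd_failures;
    };

    static void account_reclaim_run(struct pgdat_model *pgdat,
                                    unsigned long nr_reclaimed)
    {
        if (nr_reclaimed)
            pgdat->kswapd_failures = 0;  /* any progress revives kswapd */
        else
            pgdat->kswapd_failures++;
    }

    static bool kswapd_is_hopeless(const struct pgdat_model *pgdat)
    {
        return pgdat->kswapd_failures >= MAX_RECLAIM_RETRIES;
    }

    int main(void)
    {
        struct pgdat_model pgdat = { 0 };

        for (int i = 0; i < MAX_RECLAIM_RETRIES; i++)
            account_reclaim_run(&pgdat, 0);
        printf("hopeless: %d\n", kswapd_is_hopeless(&pgdat)); /* 1 */

        account_reclaim_run(&pgdat, 32); /* a direct reclaim run succeeds */
        printf("hopeless: %d\n", kswapd_is_hopeless(&pgdat)); /* 0 */
        return 0;
    }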
2851 * should reclaim first.
2860 /* Allocation should succeed already. Don't reclaim. */ in compaction_ready()
2863 /* Compaction cannot yet proceed. Do reclaim. */ in compaction_ready()
2869 * with reclaim to make a buffer of free pages available to give in compaction_ready()
2871 * Note that we won't actually reclaim the whole buffer in one attempt in compaction_ready()
2873 * we are already above the high+gap watermark, don't reclaim at all. in compaction_ready()
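compaction_ready() (source lines 2851-2873 above) stops reclaim for a zone once
a buffer of free pages sits above the high watermark. A sketch of that test,
assuming the kernel's compaction gap of 2 << order; zone_model is an invented
stand-in for struct zone:

    #include <stdbool.h>

    struct zone_model {
        unsigned long free_pages;
        unsigned long high_wmark;
    };

    static unsigned long compact_gap(unsigned int order)
    {
        /* room for the allocation itself plus pages being migrated */
        return 2UL << order;
    }

    static bool enough_for_compaction(const struct zone_model *z,
                                      unsigned int order)
    {
        return z->free_pages > z->high_wmark + compact_gap(order);
    }

    int main(void)
    {
        struct zone_model z = { .free_pages = 1200, .high_wmark = 1024 };

        /* gap for order 3 is 16 pages: 1200 > 1040, so don't reclaim */
        return !enough_for_compaction(&z, 3);
    }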
2881 * This is the direct reclaim path, for page-allocating processes. We only
2882 * try to reclaim pages from zones which will satisfy the caller's allocation
2899 * allowed level, force direct reclaim to scan the highmem zone as in shrink_zones()
2959 /* See comment about same check for global reclaim above */ in shrink_zones()
2998 * This is the main entry point to direct page reclaim.
3069 /* Aborted reclaim to try compaction? don't OOM, then */ in do_try_to_free_pages()
3077 * memory from reclaim. Neither of which are very common, so in do_try_to_free_pages()
3159 * responsible for cleaning pages necessary for reclaim to make forward in throttle_direct_reclaim()
3160 * progress. kjournald for example may enter direct reclaim while in throttle_direct_reclaim()
3182 * is an affinity then between processes waking up and where reclaim in throttle_direct_reclaim()
3259 * Do not enter reclaim if fatal signal was delivered while throttled. in try_to_free_pages()
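throttle_direct_reclaim() (source lines 3159-3259 above) stalls direct
reclaimers when a node's free memory is critically low, while exempting tasks
that are themselves cleaning pages. A model of the underlying watermark test,
following the shape of allow_direct_reclaim() (free pages compared against
half the summed min watermarks); the zone_model array is illustrative:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdio.h>

    struct zone_model { unsigned long free_pages, min_wmark; };

    static bool allow_direct_reclaim(const struct zone_model *zones, size_t n)
    {
        unsigned long free = 0, reserve = 0;

        for (size_t i = 0; i < n; i++) {
            free += zones[i].free_pages;
            reserve += zones[i].min_wmark;
        }
        if (!reserve)
            return true;  /* no usable zones to balance against */

        return free > reserve / 2;  /* below half the reserve: throttle */
    }

    int main(void)
    {
        struct zone_model zones[] = { { 100, 400 }, { 50, 200 } };

        /* free = 150 vs reserve / 2 = 300: the caller would be throttled */
        printf("%d\n", allow_direct_reclaim(zones, 2)); /* 0 */
        return 0;
    }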
3279 /* Only used by soft limit reclaim. Do not reuse for anything else. */
3309 * if we don't reclaim here, the shrink_node from balance_pgdat in mem_cgroup_shrink_node()
3353 * the reclaim does not bail out early. in try_to_free_mem_cgroup_pages()
3401 * should not be checked at the same time as reclaim would in pgdat_watermark_boosted()
3488 /* Hopeless node, leave it to direct reclaim */ in prepare_kswapd_sleep()
3505 * reclaim or if the lack of progress was due to pages under writeback.
3514 /* Reclaim a number of pages proportional to the number of zones */ in kswapd_shrink_node()
3538 * excessive reclaim. Assume that a process that requested a in kswapd_shrink_node()
3539 * high-order allocation can direct reclaim/compact. in kswapd_shrink_node()
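kswapd_shrink_node() (source lines 3514-3539 above) sizes its reclaim target
from the eligible zones. A sketch assuming the mainline accumulation of
max(high watermark, SWAP_CLUSTER_MAX) per zone; kswapd_target() and the plain
array standing in for pgdat->node_zones are invented for the example:

    #include <stdio.h>

    #define SWAP_CLUSTER_MAX 32UL

    static unsigned long kswapd_target(const unsigned long *high_wmark,
                                       int nr_zones)
    {
        unsigned long nr_to_reclaim = 0;

        for (int z = 0; z < nr_zones; z++) {
            unsigned long per_zone = high_wmark[z];

            if (per_zone < SWAP_CLUSTER_MAX)
                per_zone = SWAP_CLUSTER_MAX;  /* at least one batch per zone */
            nr_to_reclaim += per_zone;
        }
        return nr_to_reclaim;
    }

    int main(void)
    {
        unsigned long high_wmark[] = { 16, 128, 1024 };

        printf("%lu\n", kswapd_target(high_wmark, 3)); /* 32 + 128 + 1024 */
        return 0;
    }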
3548 * For kswapd, balance_pgdat() will reclaim pages across a node from zones
3557 * or lower is eligible for reclaim until at least one usable zone is
3583 * Account for the reclaim boost. Note that the zone boost is left in in balance_pgdat()
3585 * stall or direct reclaim until kswapd is finished. in balance_pgdat()
3614 * buffers can relieve lowmem pressure. Reclaim may still not in balance_pgdat()
3616 * request are balanced to avoid excessive reclaim from kswapd. in balance_pgdat()
3633 * on the grounds that the normal reclaim should be enough to in balance_pgdat()
3643 * If boosting is not active then only reclaim if there are no in balance_pgdat()
3650 /* Limit the priority of boosting to avoid reclaim writeback */ in balance_pgdat()
3655 * Do not write back or swap pages for boosted reclaim. The in balance_pgdat()
3657 * from reclaim context. If no pages are reclaimed, the in balance_pgdat()
3658 * reclaim will be aborted. in balance_pgdat()
3678 /* Call soft limit reclaim before calling shrink_node. */ in balance_pgdat()
3717 * If reclaim made no progress for a boost, stop reclaim as in balance_pgdat()
3732 /* If reclaim was boosted, account for the reclaim done in this pass */ in balance_pgdat()
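Source lines 3650-3658 and 3717-3732 above describe boosted reclaim in
balance_pgdat(): priority is capped, writeback and swap are disabled, and the
pass ends once the boost is paid off or progress stops. A condensed model of
that bookkeeping; boost_pass and both helpers are invented, and the priority
cap of 10 assumes DEF_PRIORITY - 2:

    #include <stdbool.h>
    #include <stdio.h>

    struct boost_pass {
        unsigned long nr_boost_reclaim; /* pages owed to watermark boosts */
        bool may_writepage, may_swap;
        int priority;
    };

    static void setup_boosted_pass(struct boost_pass *p)
    {
        p->may_writepage = false; /* no writeback from boosted reclaim */
        p->may_swap = false;      /* and no swap either */
        if (p->priority > 10)
            p->priority = 10;     /* limit priority to avoid writeback */
    }

    static bool boost_finished(struct boost_pass *p, unsigned long nr_reclaimed)
    {
        if (!nr_reclaimed)
            return true;          /* no progress for the boost: abort */

        p->nr_boost_reclaim -= p->nr_boost_reclaim < nr_reclaimed ?
                               p->nr_boost_reclaim : nr_reclaimed;
        return p->nr_boost_reclaim == 0;
    }

    int main(void)
    {
        struct boost_pass p = { .nr_boost_reclaim = 64, .priority = 12 };

        setup_boosted_pass(&p);
        printf("prio %d, done %d\n", p.priority, boost_finished(&p, 40));
        printf("done %d\n", boost_finished(&p, 40)); /* boost paid off: 1 */
        return 0;
    }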
3772 * sleep after previous reclaim attempt (node is still unbalanced). In that
3773 * case return the zone index of the previous kswapd reclaim cycle.
3797 * deliberate on the assumption that if reclaim cannot keep an in kswapd_try_to_sleep()
3941 * Reclaim begins at the requested order but if a high-order in kswapd()
3942 * reclaim fails then kswapd falls back to reclaiming for in kswapd()
3972 * kswapd should reclaim (direct reclaim is deferred), wake it up for the zone's
3973 * pgdat. It will wake up kcompactd after reclaiming memory. If kswapd reclaim
4001 /* Hopeless node, leave it to direct reclaim if possible */ in wakeup_kswapd()
4119 * Node reclaim mode
4131 #define RECLAIM_WRITE (1<<1) /* Writeout pages during reclaim */
4132 #define RECLAIM_UNMAP (1<<2) /* Unmap pages during reclaim */
4149 * slab reclaim needs to occur.
4167 /* Work out how many page cache pages we can reclaim in this reclaim_mode */
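The node reclaim fragments above (source lines 4119-4167) explain that
RECLAIM_WRITE and RECLAIM_UNMAP widen what counts as reclaimable page cache. A
standalone model following the shape of node_pagecache_reclaimable();
node_stats is a simplified stand-in for the node's vmstat counters:

    #include <stdio.h>

    #define RECLAIM_WRITE (1 << 1) /* writeout pages during reclaim */
    #define RECLAIM_UNMAP (1 << 2) /* unmap pages during reclaim */

    struct node_stats {
        unsigned long file_pages, file_mapped, file_dirty;
    };

    static unsigned long pagecache_reclaimable(const struct node_stats *s,
                                               unsigned int reclaim_mode)
    {
        unsigned long nr = s->file_pages, delta = 0;

        if (!(reclaim_mode & RECLAIM_UNMAP))
            nr -= s->file_mapped;   /* mapped pages are off limits */
        if (!(reclaim_mode & RECLAIM_WRITE))
            delta += s->file_dirty; /* dirty pages would need writeout */

        /* dirty pages may also be mapped; never report a negative count */
        return delta < nr ? nr - delta : 0;
    }

    int main(void)
    {
        struct node_stats s = { .file_pages = 1000, .file_mapped = 300,
                                .file_dirty = 100 };

        printf("%lu\n", pagecache_reclaimable(&s, 0));   /* 600 */
        printf("%lu\n", pagecache_reclaimable(&s,
                        RECLAIM_WRITE | RECLAIM_UNMAP)); /* 1000 */
        return 0;
    }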
4196 * Try to free up some pages from this node through reclaim.
4261 * Node reclaim reclaims unmapped file backed pages and in node_reclaim()
4266 * thrown out if the node is overallocated. So we do not reclaim in node_reclaim()
4282 * Only run node reclaim on the local node or on nodes that do not in node_reclaim()
4376 pr_info("reclaimed %lu purgeable pages.\n", nr); in purgeable_node()