.. _unevictable_lru:

==============================
Unevictable LRU Infrastructure
==============================

.. contents:: :local:


Introduction
============

This document describes the Linux memory manager's "Unevictable LRU"
infrastructure and the use of this to manage several types of "unevictable"
pages.

The document attempts to provide the overall rationale behind this mechanism
and the rationale for some of the design decisions that drove the
implementation.  The latter design rationale is discussed in the context of an
implementation description.  Admittedly, one can obtain the implementation
details - the "what does it do?" - by reading the code.  One hopes that the
descriptions below add value by providing the answer to "why does it do that?".



The Unevictable LRU
===================

The Unevictable LRU facility adds an additional LRU list to track unevictable
pages and to hide these pages from vmscan.  This mechanism is based on a patch
by Larry Woodman of Red Hat to address several scalability problems with page
reclaim in Linux.  The problems have been observed at customer sites on large
memory x86_64 systems.

To illustrate this with an example, a non-NUMA x86_64 platform with 128GB of
main memory will have over 32 million 4k pages in a single node.  When a large
fraction of these pages are not evictable for any reason [see below], vmscan
will spend a lot of time scanning the LRU lists looking for the small fraction
of pages that are evictable.  This can result in a situation where all CPUs are
spending 100% of their time in vmscan for hours or days on end, with the system
completely unresponsive.

The unevictable list addresses the following classes of unevictable pages:

 * Those owned by ramfs.

 * Those mapped into SHM_LOCK'd shared memory regions.

 * Those mapped into VM_LOCKED [mlock()ed] VMAs.

The infrastructure may also be able to handle other conditions that make pages
unevictable, either by definition or by circumstance, in the future.


The Unevictable Page List
-------------------------

The Unevictable LRU infrastructure consists of an additional, per-node, LRU list
called the "unevictable" list and an associated page flag, PG_unevictable, to
indicate that the page is being managed on the unevictable list.

The PG_unevictable flag is analogous to, and mutually exclusive with, the
PG_active flag in that it indicates on which LRU list a page resides when
PG_lru is set.

The Unevictable LRU infrastructure maintains unevictable pages on an additional
LRU list for a few reasons:

 (1) We get to "treat unevictable pages just like we treat other pages in the
     system - which means we get to use the same code to manipulate them, the
     same code to isolate them (for migrate, etc.), the same code to keep track
     of the statistics, etc..." [Rik van Riel]

 (2) We want to be able to migrate unevictable pages between nodes for memory
     defragmentation, workload management and memory hotplug.  The Linux kernel
     can only migrate pages that it can successfully isolate from the LRU
     lists.  If we were to maintain pages elsewhere than on an LRU-like list,
     where they can be found by isolate_lru_page(), we would prevent their
     migration, unless we reworked migration code to find the unevictable pages
     itself.


The unevictable list does not differentiate between file-backed and anonymous,
swap-backed pages.  This differentiation is only important while the pages are,
in fact, evictable.

The unevictable list benefits from the "arrayification" of the per-node LRU
lists and statistics originally proposed and posted by Christoph Lameter.


Memory Control Group Interaction
--------------------------------

The unevictable LRU facility interacts with the memory control group [aka
memory controller; see Documentation/admin-guide/cgroup-v1/memory.rst] by
extending the lru_list enum.

The memory controller data structure automatically gets a per-node unevictable
list as a result of the "arrayification" of the per-node LRU lists (one per
lru_list enum element).  The memory controller tracks the movement of pages to
and from the unevictable list.

When a memory control group comes under memory pressure, the controller will
not attempt to reclaim pages on the unevictable list.  This has a couple of
effects:

 (1) Because the pages are "hidden" from reclaim on the unevictable list, the
     reclaim process can be more efficient, dealing only with pages that have a
     chance of being reclaimed.

 (2) On the other hand, if too many of the pages charged to the control group
     are unevictable, the evictable portion of the working set of the tasks in
     the control group may not fit into the available memory.  This can cause
     the control group to thrash or to OOM-kill tasks.

.. _mark_addr_space_unevict:

Marking Address Spaces Unevictable
----------------------------------

For facilities such as ramfs none of the pages attached to the address space
may be evicted.  To prevent eviction of any such pages, the AS_UNEVICTABLE
address space flag is provided, and this can be manipulated by a filesystem
using a number of wrapper functions:

 * ``void mapping_set_unevictable(struct address_space *mapping);``

	Mark the address space as being completely unevictable.

 * ``void mapping_clear_unevictable(struct address_space *mapping);``

	Mark the address space as being evictable.

 * ``int mapping_unevictable(struct address_space *mapping);``

	Query the address space, and return true if it is completely
	unevictable.

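As an illustration, here is a minimal sketch of how a filesystem might use
these helpers when creating an inode, modelled loosely on what ramfs does
(the example_get_inode() wrapper is hypothetical)::

    struct inode *example_get_inode(struct super_block *sb, umode_t mode)
    {
        struct inode *inode = new_inode(sb);

        if (inode) {
            inode->i_mode = mode;
            /* Pages in this mapping will be hidden from vmscan. */
            mapping_set_unevictable(inode->i_mapping);
        }
        return inode;
    }
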
These are currently used in three places in the kernel:

 (1) By ramfs to mark the address spaces of its inodes when they are created,
     and this mark remains for the life of the inode.

 (2) By SYSV SHM to mark SHM_LOCK'd address spaces until SHM_UNLOCK is called.

     Note that SHM_LOCK is not required to page in the locked pages if they're
     swapped out; the application must touch the pages manually if it wants to
     ensure they're in memory (see the userspace sketch after this list).

 (3) By the i915 driver to mark pinned address spaces until they are unpinned.
     The amount of unevictable memory marked by the i915 driver is roughly the
     bounded object size in debugfs/dri/0/i915_gem_objects.
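
For case (2), here is a minimal userspace sketch of locking and unlocking a
shared memory segment (error handling trimmed; SHM_LOCK may require
CAP_IPC_LOCK or a sufficient RLIMIT_MEMLOCK)::

    #include <string.h>
    #include <sys/ipc.h>
    #include <sys/shm.h>

    int main(void)
    {
        /* Create a 1 MiB private segment. */
        int id = shmget(IPC_PRIVATE, 1 << 20, IPC_CREAT | 0600);
        char *p = shmat(id, NULL, 0);

        shmctl(id, SHM_LOCK, NULL);   /* mark the mapping unevictable */
        memset(p, 0, 1 << 20);        /* SHM_LOCK does not fault pages in;
                                         touch them to bring them in */
        shmctl(id, SHM_UNLOCK, NULL); /* pages become evictable again */
        shmdt(p);
        shmctl(id, IPC_RMID, NULL);
        return 0;
    }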


Detecting Unevictable Pages
---------------------------

The function page_evictable() in vmscan.c determines whether a page is
evictable or not using the query function outlined above [see section
:ref:`Marking address spaces unevictable <mark_addr_space_unevict>`]
to check the AS_UNEVICTABLE flag.

For address spaces that are so marked after being populated (as SHM regions
might be), the lock action (eg: SHM_LOCK) can be lazy, and need not populate
the page tables for the region as does, for example, mlock(), nor need it make
any special effort to push any pages in the SHM_LOCK'd area to the unevictable
list.  Instead, vmscan will do this if and when it encounters the pages during
a reclamation scan.

On an unlock action (such as SHM_UNLOCK), the unlocker (eg: shmctl()) must scan
the pages in the region and "rescue" them from the unevictable list if no other
condition is keeping them unevictable.  If an unevictable region is destroyed,
the pages are also "rescued" from the unevictable list in the process of
freeing them.

page_evictable() also checks for mlocked pages by testing an additional page
flag, PG_mlocked (as wrapped by PageMlocked()), which is set when a page is
faulted into a VM_LOCKED VMA, or found in a VMA being VM_LOCKED.
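
Putting the two tests together, the core of page_evictable() reduces to
something like the following simplified sketch (the real function also guards
the page_mapping() lookup, e.g. with RCU)::

    bool page_evictable(struct page *page)
    {
        /* Evictable only if neither the mapping nor the page says no. */
        return !mapping_unevictable(page_mapping(page)) &&
               !PageMlocked(page);
    }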


Vmscan's Handling of Unevictable Pages
--------------------------------------

If unevictable pages are culled in the fault path, or moved to the unevictable
list at mlock() or mmap() time, vmscan will not encounter the pages until they
have become evictable again (via munlock() for example) and have been "rescued"
from the unevictable list.  However, there may be situations where we decide,
for the sake of expediency, to leave an unevictable page on one of the regular
active/inactive LRU lists for vmscan to deal with.  vmscan checks for such
pages in all of the shrink_{active|inactive|page}_list() functions and will
"cull" such pages that it encounters: that is, it diverts those pages to the
unevictable list for the node being scanned.

There may be situations where a page is mapped into a VM_LOCKED VMA, but the
page is not marked as PG_mlocked.  Such pages will make it all the way to
shrink_page_list() where they will be detected when vmscan walks the reverse
map in try_to_unmap().  If try_to_unmap() returns SWAP_MLOCK,
shrink_page_list() will cull the page at that point.

To "cull" an unevictable page, vmscan simply puts the page back on the LRU list
using putback_lru_page() - the inverse operation to isolate_lru_page() - after
dropping the page lock.  Because the condition which makes the page unevictable
may change once the page is unlocked, putback_lru_page() will recheck the
unevictable state of a page that it places on the unevictable list.  If the
page has become evictable, putback_lru_page() removes it from the list and
retries, including the page_evictable() test.  Because such a race is a rare
event and movement of pages onto the unevictable list should be rare, these
extra evictability checks should not occur in the majority of calls to
putback_lru_page().
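
The recheck can be pictured with this simplified sketch of putback_lru_page()
(reference counting, locking and statistics are omitted, and the LRU-list
helpers are hypothetical names)::

    void putback_lru_page(struct page *page)
    {
    redo:
        if (page_evictable(page)) {
            add_page_to_evictable_lru(page);        /* hypothetical */
        } else {
            SetPageUnevictable(page);
            add_page_to_unevictable_lru(page);      /* hypothetical */
            /*
             * The page may have become evictable since the page lock
             * was dropped; if so, pull it back off and retry.
             */
            if (page_evictable(page)) {
                del_page_from_lru(page);            /* hypothetical */
                ClearPageUnevictable(page);
                goto redo;
            }
        }
    }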


MLOCKED Pages
=============

The unevictable page list is also useful for mlock(), in addition to ramfs and
SYSV SHM.  Note that mlock() is only available in CONFIG_MMU=y situations; in
NOMMU situations, all mappings are effectively mlocked.


History
-------

The "Unevictable mlocked Pages" infrastructure is based on work originally
posted by Nick Piggin in an RFC patch entitled "mm: mlocked pages off LRU".
Nick posted his patch as an alternative to a patch posted by Christoph Lameter
to achieve the same objective: hiding mlocked pages from vmscan.

In Nick's patch, he used one of the struct page LRU list link fields as a count
of VM_LOCKED VMAs that map the page.  This use of the link field for a count
prevented the management of the pages on an LRU list, and thus mlocked pages
were not migratable as isolate_lru_page() could not find them, and the LRU list
link field was not available to the migration subsystem.

Nick resolved this by putting mlocked pages back on the LRU list before
attempting to isolate them, thus abandoning the count of VM_LOCKED VMAs.  When
Nick's patch was integrated with the Unevictable LRU work, the count was
replaced by walking the reverse map to determine whether any VM_LOCKED VMAs
mapped the page.  More on this below.


Basic Management
----------------

mlocked pages - pages mapped into a VM_LOCKED VMA - are a class of unevictable
pages.  When such a page has been "noticed" by the memory management subsystem,
the page is marked with the PG_mlocked flag.  This can be manipulated using the
PageMlocked() functions.

A PG_mlocked page will be placed on the unevictable list when it is added to
the LRU.  Such pages can be "noticed" by memory management in several places:

 (1) in the mlock()/mlockall() system call handlers;

 (2) in the mmap() system call handler when mmapping a region with the
     MAP_LOCKED flag;

 (3) mmapping a region in a task that has called mlockall() with the MCL_FUTURE
     flag;

 (4) in the fault path, if mlocked pages are "culled" there, and when a
     VM_LOCKED stack segment is expanded; or

 (5) as mentioned above, in vmscan:shrink_page_list() when attempting to
     reclaim a page in a VM_LOCKED VMA via try_to_unmap(),

all of which result in the VM_LOCKED flag being set for the VMA if it doesn't
already have it set.

mlocked pages become unlocked and rescued from the unevictable list when:

 (1) mapped in a range unlocked via the munlock()/munlockall() system calls;

 (2) munmap()'d out of the last VM_LOCKED VMA that maps the page, including
     unmapping at task exit;

 (3) when the page is truncated from the last VM_LOCKED VMA of an mmapped file;
     or

 (4) before a page is COW'd in a VM_LOCKED VMA.


mlock()/mlockall() System Call Handling
---------------------------------------

Both [do\_]mlock() and [do\_]mlockall() system call handlers call mlock_fixup()
for each VMA in the range specified by the call.  In the case of mlockall(),
this is the entire active address space of the task.  Note that mlock_fixup()
is used for both mlocking and munlocking a range of memory.  A call to mlock()
an already VM_LOCKED VMA, or to munlock() a VMA that is not VM_LOCKED, is
treated as a no-op, and mlock_fixup() simply returns.

If the VMA passes some filtering as described in "Filtering Special VMAs"
below, mlock_fixup() will attempt to merge the VMA with its neighbors or split
off a subset of the VMA if the range does not cover the entire VMA.  Once the
VMA has been merged or split or neither, mlock_fixup() will call
populate_vma_page_range() to fault in the pages via get_user_pages() and to
mark the pages as mlocked via mlock_vma_page().

Note that the VMA being mlocked might be mapped with PROT_NONE.  In this case,
get_user_pages() will be unable to fault in the pages.  That's okay.  If pages
do end up getting faulted into this VM_LOCKED VMA, we'll handle them in the
fault path or in vmscan.

Also note that a page returned by get_user_pages() could be truncated or
migrated out from under us, while we're trying to mlock it.  To detect this,
populate_vma_page_range() checks page_mapping() after acquiring the page lock.
If the page is still associated with its mapping, we'll go ahead and call
mlock_vma_page().  If the mapping is gone, we just unlock the page and move on.
In the worst case, this will result in a page mapped in a VM_LOCKED VMA
remaining on a normal LRU list without being PageMlocked().  Again, vmscan will
detect and cull such pages.

mlock_vma_page() will call TestSetPageMlocked() for each page returned by
get_user_pages().  We use TestSetPageMlocked() because the page might already
be mlocked by another task/VMA and we don't want to do extra work.  We
especially do not want to count an mlocked page more than once in the
statistics.  If the page was already mlocked, mlock_vma_page() need do nothing
more.

If the page was NOT already mlocked, mlock_vma_page() attempts to isolate the
page from the LRU, as it is likely on the appropriate active or inactive list
at that time.  If isolate_lru_page() succeeds, mlock_vma_page() will put back
the page - by calling putback_lru_page() - which will notice that the page is
now mlocked and divert the page to the node's unevictable list.  If
mlock_vma_page() is unable to isolate the page from the LRU, vmscan will handle
it later if and when it attempts to reclaim the page.
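
The behaviour just described condenses into the following simplified sketch of
mlock_vma_page() (statistics updates elided)::

    void mlock_vma_page(struct page *page)
    {
        BUG_ON(!PageLocked(page));      /* caller holds the page lock */

        if (!TestSetPageMlocked(page)) {
            /* First locker: try to move the page to the unevictable list. */
            if (!isolate_lru_page(page))
                putback_lru_page(page); /* sees PG_mlocked and diverts */
            /*
             * If isolation failed, leave the page where it is; vmscan
             * will notice PG_mlocked and cull it later.
             */
        }
    }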


Filtering Special VMAs
----------------------

mlock_fixup() filters several classes of "special" VMAs:

1) VMAs with VM_IO or VM_PFNMAP set are skipped entirely.  The pages behind
   these mappings are inherently pinned, so we don't need to mark them as
   mlocked.  In any case, most of these pages have no struct page in which to
   set the flag.  Because of this, get_user_pages() will fail for these VMAs,
   so there is no sense in attempting to visit them.

2) VMAs mapping hugetlbfs pages are already effectively pinned into memory.  We
   neither need nor want to mlock() these pages.  However, to preserve the
   prior behavior of mlock() - before the unevictable/mlock changes -
   mlock_fixup() will call make_pages_present() in the hugetlbfs VMA range to
   allocate the huge pages and populate the ptes.

3) VMAs with VM_DONTEXPAND are generally userspace mappings of kernel pages,
   such as the VDSO page, relay channel pages, etc.  These pages are inherently
   unevictable and are not managed on the LRU lists.  mlock_fixup() treats
   these VMAs the same as hugetlbfs VMAs.  It calls make_pages_present() to
   populate the ptes.

Note that for all of these special VMAs, mlock_fixup() does not set the
VM_LOCKED flag.  Therefore, we won't have to deal with them later during
munlock(), munmap() or task exit.  Neither does mlock_fixup() account these
VMAs against the task's "locked_vm".
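
The filter itself amounts to a test like the following sketch (the helper name
is hypothetical; the surrounding merge/split logic of mlock_fixup() is elided)::

    static bool vma_is_special_for_mlock(struct vm_area_struct *vma)
    {
        /* No struct pages to mark; get_user_pages() would fail anyway. */
        if (vma->vm_flags & (VM_IO | VM_PFNMAP))
            return true;

        /* Already pinned (hugetlbfs), or kernel pages never on the LRU. */
        if (is_vm_hugetlb_page(vma) || (vma->vm_flags & VM_DONTEXPAND))
            return true;

        return false;
    }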

.. _munlock_munlockall_handling:

munlock()/munlockall() System Call Handling
-------------------------------------------

The munlock() and munlockall() system calls are handled by the same functions -
do_mlock[all]() - as the mlock() and mlockall() system calls with the unlock vs
lock operation indicated by an argument.  So, these system calls are also
handled by mlock_fixup().  Again, if called for an already munlocked VMA,
mlock_fixup() simply returns.  Because of the VMA filtering discussed above,
VM_LOCKED will not be set in any "special" VMAs.  So, these VMAs will be
ignored for munlock.

If the VMA is VM_LOCKED, mlock_fixup() again attempts to merge or split off the
specified range.  The range is then munlocked via the function
populate_vma_page_range() - the same function used to mlock a VMA range -
passing a flag to indicate that munlock() is being performed.

Because the VMA access protections could have been changed to PROT_NONE after
faulting in and mlocking pages, get_user_pages() was unreliable for visiting
these pages for munlocking.  Because we don't want to leave pages mlocked,
get_user_pages() was enhanced to accept a flag to ignore the permissions when
fetching the pages - all of which should be resident as a result of previous
mlocking.

For munlock(), populate_vma_page_range() unlocks individual pages by calling
munlock_vma_page().  munlock_vma_page() unconditionally clears the PG_mlocked
flag using TestClearPageMlocked().  As with mlock_vma_page(),
munlock_vma_page() uses the Test*PageMlocked() function to handle the case
where the page might have already been unlocked by another task.  If the page
was mlocked, munlock_vma_page() updates the zone statistics for the number of
mlocked pages.  Note, however, that at this point we haven't checked whether
the page is mapped by other VM_LOCKED VMAs.

We can't call page_mlock(), the function that walks the reverse map to
check for other VM_LOCKED VMAs, without first isolating the page from the LRU.
page_mlock() is a variant of try_to_unmap() and thus requires that the page
not be on an LRU list [more on these below].  However, the call to
isolate_lru_page() could fail, in which case we can't call page_mlock().  So,
we go ahead and clear PG_mlocked up front, as this might be the only chance we
have.  If we can successfully isolate the page, we go ahead and call
page_mlock(), which will restore the PG_mlocked flag and update the zone
page statistics if it finds another VMA holding the page mlocked.  If we fail
to isolate the page, we'll have left a potentially mlocked page on the LRU.
This is fine, because we'll catch it later if and when vmscan tries to reclaim
the page.  This should be relatively rare.
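
Condensing the above, a simplified sketch of munlock_vma_page() (page lock
held by the caller; statistics elided)::

    void munlock_vma_page(struct page *page)
    {
        /* Clear up front: this may be our only chance. */
        if (TestClearPageMlocked(page)) {
            if (!isolate_lru_page(page)) {
                /*
                 * Walk the reverse map; if another VM_LOCKED VMA
                 * still maps the page, this re-sets PG_mlocked.
                 */
                page_mlock(page);
                putback_lru_page(page);
            }
            /*
             * Isolation failure leaves a potentially mlocked page on
             * the LRU; vmscan will recheck it during reclaim.
             */
        }
    }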


Migrating MLOCKED Pages
-----------------------

A page that is being migrated has been isolated from the LRU lists and is held
locked across unmapping of the page, updating the page's address space entry
and copying the contents and state, until the page table entry has been
replaced with an entry that refers to the new page.  Linux supports migration
of mlocked pages and other unevictable pages.  This involves simply moving the
PG_mlocked and PG_unevictable states from the old page to the new page.

Note that page migration can race with mlocking or munlocking of the same page.
This has been discussed from the mlock/munlock perspective in the respective
sections above.  Both processes (migration and m[un]locking) hold the page
locked.  This provides the first level of synchronization.  Page migration
zeros out the page_mapping of the old page before unlocking it, so m[un]lock
can skip these pages by testing the page mapping under page lock.

To complete page migration, we place the new and old pages back onto the LRU
after dropping the page lock.  The "unneeded" page - old page on success, new
page on failure - will be freed when the reference count held by the migration
process is released.  To ensure that we don't strand pages on the unevictable
list because of a race between munlock and migration, page migration uses the
putback_lru_page() function to add migrated pages back to the LRU.


Compacting MLOCKED Pages
------------------------

The unevictable LRU can be scanned for compactable regions and the default
behavior is to do so.  /proc/sys/vm/compact_unevictable_allowed controls
this behavior (see Documentation/admin-guide/sysctl/vm.rst).  Once scanning of
the unevictable LRU is enabled, the work of compaction is mostly handled by
the page migration code and the same work flow as described in MIGRATING
MLOCKED PAGES will apply.

MLOCKING Transparent Huge Pages
-------------------------------

A transparent huge page is represented by a single entry on an LRU list.
Therefore, we can only make unevictable an entire compound page, not
individual subpages.

If a user tries to mlock() part of a huge page, we want the rest of the
page to be reclaimable.

We cannot just split the page on partial mlock() as split_huge_page() can
fail, and a new intermittent failure mode for the syscall is undesirable.

We handle this by keeping PTE-mapped huge pages on normal LRU lists: the
PMD on the border of a VM_LOCKED VMA will be split into a PTE table.

This way the huge page is accessible for vmscan.  Under memory pressure the
page will be split, subpages which belong to VM_LOCKED VMAs will be moved
to the unevictable LRU and the rest can be reclaimed.

See also the comment in follow_trans_huge_pmd().

mmap(MAP_LOCKED) System Call Handling
-------------------------------------

In addition to the mlock()/mlockall() system calls, an application can request
that a region of memory be mlocked by supplying the MAP_LOCKED flag to the
mmap() call.  There is one important and subtle difference here, though:
mmap() + mlock() will fail if the range cannot be faulted in (e.g. because
mm_populate() fails) and returns with ENOMEM, while mmap(MAP_LOCKED) will not
fail.  The mmapped area will still have the properties of a locked area - i.e.
pages will not get swapped out - but major page faults to fault memory in might
still happen.
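
A userspace sketch of the difference (error handling trimmed); only the
mmap() + mlock() variant reports population failure::

    #include <stdio.h>
    #include <sys/mman.h>

    #define LEN (1UL << 20)    /* 1 MiB */

    int main(void)
    {
        /* Locks the region but silently tolerates population failure. */
        void *a = mmap(NULL, LEN, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS | MAP_LOCKED, -1, 0);

        /* mlock() fails with ENOMEM if the range cannot be faulted in. */
        void *b = mmap(NULL, LEN, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (b != MAP_FAILED && mlock(b, LEN) != 0)
            perror("mlock");

        munmap(a, LEN);
        munmap(b, LEN);
        return 0;
    }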

Furthermore, any mmap() call or brk() call that expands the heap by a
task that has previously called mlockall() with the MCL_FUTURE flag will result
in the newly mapped memory being mlocked.  Before the unevictable/mlock
changes, the kernel simply called make_pages_present() to allocate pages and
populate the page table.

To mlock a range of memory under the unevictable/mlock infrastructure, the
mmap() handler and task address space expansion functions call
populate_vma_page_range() specifying the VMA and the address range to mlock.

The callers of populate_vma_page_range() will have already added the memory
range to be mlocked to the task's "locked_vm".  To account for filtered VMAs,
populate_vma_page_range() returns the number of pages NOT mlocked.  All of the
callers then subtract a non-negative return value from the task's locked_vm.  A
negative return value represents an error - for example, from get_user_pages()
attempting to fault in a VMA with PROT_NONE access.  In this case, we leave the
memory range accounted as locked_vm, as the protections could be changed later
and pages allocated into that region.
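
The accounting convention might be sketched as follows (a hypothetical caller;
locked_vm is shown as a plain counter, and the exact signature of
populate_vma_page_range() varies across kernel versions)::

    static void account_mlocked_range(struct mm_struct *mm,
                                      struct vm_area_struct *vma,
                                      unsigned long start, unsigned long end)
    {
        long ret;

        mm->locked_vm += (end - start) >> PAGE_SHIFT;   /* assume success */
        ret = populate_vma_page_range(vma, start, end, NULL);
        if (ret >= 0)
            mm->locked_vm -= ret;   /* pages NOT mlocked (filtered VMAs) */
        /* On ret < 0, leave the range accounted as locked_vm. */
    }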


munmap()/exit()/exec() System Call Handling
-------------------------------------------

When unmapping an mlocked region of memory, whether by an explicit call to
munmap() or via an internal unmap from exit() or exec() processing, we must
munlock the pages if we're removing the last VM_LOCKED VMA that maps the pages.
Before the unevictable/mlock changes, mlocking did not mark the pages in any
way, so unmapping them required no processing.

To munlock a range of memory under the unevictable/mlock infrastructure, the
munmap() handler and the task address space teardown function call
munlock_vma_pages_all().  The name reflects the observation that one always
specifies the entire VMA range when munlock()ing during unmap of a region.
Because of the VMA filtering when mlocking() regions, only "normal" VMAs that
actually contain mlocked pages will be passed to munlock_vma_pages_all().

munlock_vma_pages_all() clears the VM_LOCKED VMA flag and, like mlock_fixup()
for the munlock case, calls __munlock_vma_pages_range() to walk the page table
for the VMA's memory range and munlock_vma_page() each resident page mapped by
the VMA.  This effectively munlocks the page, but only if this is the last
VM_LOCKED VMA that maps the page.


try_to_unmap()
--------------

Pages can, of course, be mapped into multiple VMAs.  Some of these VMAs may
have the VM_LOCKED flag set.  It is possible for a page mapped into one or more
VM_LOCKED VMAs not to have the PG_mlocked flag set and therefore reside on one
of the active or inactive LRU lists.  This could happen if, for example, a task
in the process of munlocking the page could not isolate the page from the LRU.
As a result, vmscan/shrink_page_list() might encounter such a page as described
in section "vmscan's handling of unevictable pages".  To handle this situation,
try_to_unmap() checks for VM_LOCKED VMAs while it is walking a page's reverse
map.

try_to_unmap() is always called, by either vmscan for reclaim or by page
migration, with the argument page locked and isolated from the LRU.  Separate
functions handle anonymous and mapped file and KSM pages, as these types of
pages have different reverse map lookup mechanisms, with different locking.
In each case, whether rmap_walk_anon() or rmap_walk_file() or rmap_walk_ksm(),
it will call try_to_unmap_one() for every VMA which might contain the page.

When trying to reclaim, if try_to_unmap_one() finds the page in a VM_LOCKED
VMA, it will then mlock the page via mlock_vma_page() instead of unmapping it,
and return SWAP_MLOCK to indicate that the page is unevictable: and the scan
stops there.

mlock_vma_page() is called while holding the page table's lock (in addition
to the page lock, and the rmap lock): to serialize against concurrent mlock or
munlock or munmap system calls, mm teardown (munlock_vma_pages_all), reclaim,
holepunching, and truncation of file pages and their anonymous COWed pages.
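
Within try_to_unmap_one(), the check amounts to this simplified fragment::

    /* In try_to_unmap_one(), during a reclaim walk (simplified). */
    if (vma->vm_flags & VM_LOCKED) {
        /* Re-mark the page instead of unmapping it. */
        mlock_vma_page(page);
        return SWAP_MLOCK;      /* unevictable: the rmap walk stops */
    }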


page_mlock() Reverse Map Scan
-----------------------------

When munlock_vma_page() [see section :ref:`munlock()/munlockall() System Call
Handling <munlock_munlockall_handling>` above] tries to munlock a
page, it needs to determine whether or not the page is mapped by any
VM_LOCKED VMA without actually attempting to unmap all PTEs from the
page.  For this purpose, the unevictable/mlock infrastructure
introduced a variant of try_to_unmap() called page_mlock().

page_mlock() walks the respective reverse maps looking for VM_LOCKED VMAs.
When such a VMA is found, the page is mlocked via mlock_vma_page().  This
undoes the pre-clearing of the page's PG_mlocked done by munlock_vma_page().

Note that page_mlock()'s reverse map walk must visit every VMA in a page's
reverse map to determine that a page is NOT mapped into any VM_LOCKED VMA.
However, the scan can terminate early when it encounters a VM_LOCKED VMA.
Although page_mlock() might be called a great many times when munlocking a
large region or tearing down a large address space that has been mlocked via
mlockall(), overall this is a fairly rare event.


Page Reclaim in shrink_*_list()
-------------------------------

shrink_active_list() culls any obviously unevictable pages - i.e.
!page_evictable(page) - diverting these to the unevictable list.
However, shrink_active_list() only sees unevictable pages that made it onto the
active/inactive LRU lists.  Note that these pages do not have PageUnevictable
set - otherwise they would be on the unevictable list and shrink_active_list()
would never see them.
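
The cull itself is a simple divert, roughly like this fragment from the
shrink_*_list() loops (simplified)::

    if (unlikely(!page_evictable(page))) {
        /*
         * putback_lru_page() will divert the page to the node's
         * unevictable list.
         */
        putback_lru_page(page);
        continue;
    }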

Some examples of these unevictable pages on the LRU lists are:

 (1) ramfs pages that have been placed on the LRU lists when first allocated.

 (2) SHM_LOCK'd shared memory pages.  shmctl(SHM_LOCK) does not attempt to
     allocate or fault in the pages in the shared memory region.  This happens
     when an application accesses the page the first time after SHM_LOCK'ing
     the segment.

 (3) mlocked pages that could not be isolated from the LRU and moved to the
     unevictable list in mlock_vma_page().

shrink_inactive_list() also diverts any unevictable pages that it finds on the
inactive lists to the appropriate node's unevictable list.

shrink_inactive_list() should only see SHM_LOCK'd pages that became SHM_LOCK'd
after shrink_active_list() had moved them to the inactive list, or pages mapped
into VM_LOCKED VMAs that munlock_vma_page() couldn't isolate from the LRU to
recheck via page_mlock().  shrink_inactive_list() won't notice the latter,
but will pass on such pages to shrink_page_list().

shrink_page_list() again culls obviously unevictable pages that it could
encounter for similar reasons to shrink_inactive_list().  Pages mapped into
VM_LOCKED VMAs but without PG_mlocked set will make it all the way to
try_to_unmap().  shrink_page_list() will divert them to the unevictable list
when try_to_unmap() returns SWAP_MLOCK, as discussed above.