
Lines Matching +full:attribute +full:- +full:sets

15 256M and ppc64 supports 4K and 16M.  A TLB is a cache of virtual-to-physical
87 Once a number of huge pages have been pre-allocated to the kernel huge page
150 indicates the current number of pre-allocated huge pages of the default size.
161 task that modifies ``nr_hugepages``. The default for the allowed nodes--when the
162 task has default memory policy--is all on-line nodes with memory. Allowed
187 requested by applications. Writing any non-zero value into this file
207 of the in-use huge pages to surplus huge pages. This will occur even if
209 this condition holds--that is, until ``nr_hugepages+nr_overcommit_hugepages`` is
210 increased sufficiently, or the surplus huge pages go out of use and are freed--
213 With support for multiple huge page pools at run-time available, much of
224 hugepages-${size}kB
235 which function as described above for the default huge page-sized case.
243 the ``/sysfs`` interface using the ``nr_hugepages_mempolicy`` attribute, the
246 sysctl or attribute. When the ``nr_hugepages`` attribute is used, mempolicy
252 numactl --interleave <node-list> echo 20 \
257 numactl -m <node-list> echo 20 >/proc/sys/vm/nr_hugepages_mempolicy
259 This will allocate or free ``abs(20 - nr_hugepages)`` to or from the nodes
260 specified in <node-list>, depending on whether number of persistent huge pages
262 allocated nor freed on any node not included in the specified <node-list>.
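The size of the adjustment described above can be computed directly; a small illustrative sketch (the starting pool size of 8 is an assumption for demonstration, in practice it would be read from ``/proc/sys/vm/nr_hugepages``):

```shell
# Assumed current persistent pool size (illustrative only).
nr_hugepages=8
target=20      # value written by the echo commands in the examples above
delta=$(( target - nr_hugepages ))
[ "$delta" -lt 0 ] && delta=$(( -delta ))   # abs(target - nr_hugepages)
echo "huge pages to allocate or free: $delta"
```

With these assumed values the kernel would attempt to allocate 12 additional persistent huge pages on the nodes permitted by the mempolicy.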
265 memory policy mode--bind, preferred, local or interleave--may be used. The
269 :ref:`Documentation/admin-guide/mm/numa_memory_policy.rst <numa_memory_policy>`],
289 #. The nodes allowed mask will be derived from any non-default task mempolicy,
292 shell with non-default policy, that policy will be used. One can specify a
293 node list of "all" with numactl --interleave or --membind [-m] to achieve
296 #. Any task mempolicy specified--e.g., using numactl--will be constrained by
298 be no way for a task with non-default policy running in a cpuset with a
302 #. Boot-time huge page allocation attempts to distribute the requested number
303 of huge pages over all on-line nodes with memory.
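The even boot-time distribution mentioned above can be sketched as a round-robin split; the node list and request size here are assumptions, not values from the document:

```shell
total=10        # assumed boot-time huge page request
nodes="0 1 2"   # assumed on-line nodes with memory
n=$(echo $nodes | wc -w)
base=$(( total / n ))    # every node gets at least this many
extra=$(( total % n ))   # the first 'extra' nodes get one more
i=0
for node in $nodes; do
    pages=$base
    [ "$i" -lt "$extra" ] && pages=$(( pages + 1 ))
    echo "node$node: $pages"
    i=$(( i + 1 ))
done
```

For 10 pages over 3 nodes this yields 4, 3, and 3 pages respectively; nodes without sufficient contiguous memory may receive fewer in practice.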
312 /sys/devices/system/node/node[0-9]*/hugepages/
315 contains the following attribute files::
321 The ``free_`` and ``surplus_`` attribute files are read-only. They return the number
325 The ``nr_hugepages`` attribute returns the total number of huge pages on the
326 specified node. When this attribute is written, the number of persistent huge
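The per-node attribute path can be assembled from the pattern above; the node id and page size below are assumptions (2048 kB is the common x86_64 default):

```shell
node=0         # example node id (assumption)
size_kB=2048   # example huge page size in kB (assumption)
attr="/sys/devices/system/node/node${node}/hugepages/hugepages-${size_kB}kB/nr_hugepages"
echo "$attr"
# Writing to this path (as root) adjusts the persistent pool on that
# node only, e.g.:  echo 16 > "$attr"
```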
343 mount -t hugetlbfs \
344 -o uid=<value>,gid=<value>,mode=<value>,pagesize=<value>,size=<value>,\
350 The ``uid`` and ``gid`` options set the owner and group of the root of the
354 The ``mode`` option sets the mode of the root of the file system to ``value & 01777``.
362 The ``size`` option sets the maximum amount of memory (huge pages) allowed
367 The ``min_size`` option sets the minimum amount of memory (huge pages) allowed
377 The option ``nr_inodes`` sets the maximum number of inodes that ``/mnt/huge``
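The mount options discussed above combine into a single invocation; a hedged sketch in which every option value is illustrative, not taken from the document (the mount itself requires root and an existing mount point):

```shell
# Assumed option values for demonstration only.
opts="uid=1000,gid=1000,mode=0755,pagesize=2M,size=1G,min_size=512M,nr_inodes=64"
echo "mount -t hugetlbfs -o $opts none /mnt/huge"
# Run the printed command as root after creating /mnt/huge.
```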
419 ``hugepage-shm``
420 see tools/testing/selftests/vm/hugepage-shm.c
422 ``hugepage-mmap``
423 see tools/testing/selftests/vm/hugepage-mmap.c