Searched full:pool (Results 1 – 25 of 122) sorted by relevance
| /Documentation/admin-guide/device-mapper/ |
| D | thin-provisioning.rst | 56 Pool device 59 The pool device ties together the metadata volume and the data volume. 68 Setting up a fresh pool device 71 Setting up a pool device requires a valid metadata device, and a 90 Reloading a pool table 93 You may reload a pool's table, indeed this is how the pool is resized 99 Using an existing pool device 104 dmsetup create pool \ 105 --table "0 20971520 thin-pool $metadata_dev $data_dev \ 112 thin-pool is created. People primarily interested in thin provisioning [all …]
|
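The thin-provisioning excerpt above quotes the dmsetup command that loads a thin-pool table. As a rough sketch only, the same table could be loaded programmatically through libdevmapper; the device name "pool", the 20971520-sector length taken from the quoted table, and the 128-sector data block size and 32768-block low-water mark used here are illustrative placeholders, not values mandated by the document.

#include <stdio.h>
#include <libdevmapper.h>

/* Create a "pool" device with a thin-pool target, mirroring:
 *   dmsetup create pool --table "0 20971520 thin-pool <meta> <data> 128 32768"
 */
int create_thin_pool(const char *metadata_dev, const char *data_dev)
{
	struct dm_task *dmt;
	char params[256];
	int ret = -1;

	/* thin-pool params: <metadata dev> <data dev> <block size> <low water mark> */
	snprintf(params, sizeof(params), "%s %s 128 32768",
		 metadata_dev, data_dev);

	dmt = dm_task_create(DM_DEVICE_CREATE);
	if (!dmt)
		return -1;

	if (!dm_task_set_name(dmt, "pool"))
		goto out;
	if (!dm_task_add_target(dmt, 0, 20971520, "thin-pool", params))
		goto out;
	if (dm_task_run(dmt))
		ret = 0;
out:
	dm_task_destroy(dmt);
	return ret;
}

Reloading the table of an existing pool (the resize case mentioned in the excerpt) follows the same pattern with DM_DEVICE_RELOAD followed by a resume.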
| /Documentation/netlink/specs/ |
| D | netdev.yaml | 118 name: page-pool 122 doc: Unique ID of a Page Pool instance. 130 ifindex of the netdev to which the pool belongs. 131 May be reported as 0 if the page pool was allocated for a netdev 140 doc: Id of NAPI using this Page Pool instance. 149 Number of outstanding references to this page pool (allocated 151 socket receive queues, driver receive ring, page pool recycling 152 ring, the page pool cache, etc. 162 Seconds in CLOCK_BOOTTIME of when Page Pool was detached by 163 the driver. Once detached Page Pool can no longer be used to [all …]
|
| D | nfsd.yaml | 119 name: pool-mode 208 name: pool-mode-set 209 doc: set the current server pool-mode 210 attribute-set: pool-mode 217 name: pool-mode-get 218 doc: get info about server pool-mode 219 attribute-set: pool-mode
|
| D | devlink.yaml | 12 name: sb-pool-type 250 name: sb-ingress-pool-count 253 name: sb-egress-pool-count 262 name: sb-pool-index 265 name: sb-pool-type 267 enum: sb-pool-type 269 name: sb-pool-size 272 name: sb-pool-threshold-type 571 name: sb-pool-cell-size 1381 name: sb-pool-get [all …]
|
| /Documentation/core-api/ |
| D | swiotlb.rst | 88 block. Hence the default memory pool for swiotlb allocations must be 90 allocations must be physically contiguous, the entire default memory pool is 93 The need to pre-allocate the default swiotlb pool creates a boot-time tradeoff. 94 The pool should be large enough to ensure that bounce buffer requests can 96 for space to become available. But a large pool potentially wastes memory, as 99 I/O. These VMs use a heuristic to set the default pool size to ~6% of memory, 104 default memory pool size remains an open issue. 142 as one or more "pools". The default pool is allocated during system boot with a 143 default size of 64 MiB. The default pool size may be modified with the 147 the life of the system. Each pool must be a contiguous range of physical [all …]
|
| D | genalloc.rst | 18 begins with the creation of a pool using one of: 26 A call to gen_pool_create() will create a pool. The granularity of 31 required to track the memory in the pool. The nid parameter specifies 35 The "managed" interface devm_gen_pool_create() ties the pool to a 37 pool when the given device is destroyed. 39 A pool is shut down with: 45 given pool, this function will take the rather extreme step of invoking 48 A freshly created pool has no memory to allocate. It is fairly useless in 50 to the pool. That can be done with one of: 60 pool, once again using nid as the node ID for ancillary memory allocations. [all …]
|
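The genalloc excerpt above walks through creating a pool, adding memory to it, allocating from it, and shutting it down. A minimal kernel-side sketch of that sequence, assuming a hypothetical on-chip SRAM region whose virtual address and size are supplied by the caller:

#include <linux/errno.h>
#include <linux/genalloc.h>

static struct gen_pool *sram_pool;

static int sram_pool_init(void *sram_virt, size_t size)
{
	unsigned long chunk;

	/* min_alloc_order = 5 -> 32-byte granularity; nid = -1 -> any node */
	sram_pool = gen_pool_create(5, -1);
	if (!sram_pool)
		return -ENOMEM;

	/* A freshly created pool has no memory; hand it the SRAM region */
	if (gen_pool_add(sram_pool, (unsigned long)sram_virt, size, -1)) {
		gen_pool_destroy(sram_pool);
		return -ENOMEM;
	}

	/* Allocate a 256-byte chunk from the pool, then give it back */
	chunk = gen_pool_alloc(sram_pool, 256);
	if (chunk)
		gen_pool_free(sram_pool, chunk, 256);

	return 0;
}

The devm_gen_pool_create() variant mentioned in the excerpt removes the explicit gen_pool_destroy() by tying the pool's lifetime to a struct device.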
| D | workqueue.rst | 42 worker pool. An MT wq could provide only one execution context per CPU 53 their own thread pool. 64 * Automatically regulate worker pool and level of concurrency so that 98 Each per-CPU BH worker pool contains only one pseudo worker which represents 110 When a work item is queued to a workqueue, the target worker-pool is 112 and appended on the shared worklist of the worker-pool. For example, 114 be queued on the worklist of either normal or highpri worker-pool that 117 For any thread pool implementation, managing the concurrency level 123 Each worker-pool bound to an actual CPU implements concurrency 124 management by hooking into the scheduler. The worker-pool is notified [all …]
|
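The workqueue excerpt describes how a queued work item is appended to the worklist of a worker pool. A small sketch, with made-up names, of queueing one work item on an unbound workqueue (whose items are served by the shared unbound worker pools rather than the per-CPU ones):

#include <linux/errno.h>
#include <linux/printk.h>
#include <linux/workqueue.h>

static void example_fn(struct work_struct *work)
{
	pr_info("executed by a worker from a worker pool\n");
}

static DECLARE_WORK(example_work, example_fn);
static struct workqueue_struct *example_wq;

static int example_setup(void)
{
	/* WQ_UNBOUND: items go to the unbound worker pools */
	example_wq = alloc_workqueue("example", WQ_UNBOUND, 0);
	if (!example_wq)
		return -ENOMEM;

	/* Appends example_work to the worklist of a matching worker pool */
	queue_work(example_wq, &example_work);
	return 0;
}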
| /Documentation/admin-guide/mm/ |
| D | zswap.rst | 10 dynamically allocated RAM-based memory pool. zswap basically trades CPU cycles 27 device when the compressed pool reaches its size limit. This requirement had 42 back into memory all of the pages stored in the compressed pool. The 43 pages stored in zswap will remain in the compressed pool until they are 45 pages out of the compressed pool, a swapoff on the swap device(s) will 47 compressed pool. 53 evict pages from its own compressed pool on an LRU basis and write them back to 54 the backing swap device in the case that the compressed pool is full. 56 Zswap makes use of zpool for managing the compressed memory pool. Each 59 accessed. The compressed memory pool grows on demand and shrinks as compressed [all …]
|
| D | hugetlbpage.rst | 28 persistent hugetlb pages in the kernel's huge page pool. It also displays 30 and surplus huge pages in the pool of huge pages of default size. 46 is the size of the pool of huge pages. 48 is the number of huge pages in the pool that are not yet 52 which a commitment to allocate from the pool has been made, 55 huge page from the pool of huge pages at fault time. 58 the pool above the value in ``/proc/sys/vm/nr_hugepages``. The 80 pages in the kernel's huge page pool. "Persistent" huge pages will be 81 returned to the huge page pool when freed by a task. A user with root 94 pool, a user with appropriate privilege can use either the mmap system call [all …]
|
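The hugetlbpage excerpt notes that a suitably privileged task can obtain huge pages from the pool via the mmap system call. A userspace sketch of that path, assuming the default 2 MiB huge page size and a non-empty pool (both assumptions, not guarantees):

#define _GNU_SOURCE
#include <stdio.h>
#include <sys/mman.h>

#define LENGTH (2UL * 1024 * 1024)	/* one default-sized huge page */

int main(void)
{
	void *addr = mmap(NULL, LENGTH, PROT_READ | PROT_WRITE,
			  MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);

	if (addr == MAP_FAILED) {
		perror("mmap");	/* an exhausted pool typically shows up as ENOMEM */
		return 1;
	}

	((char *)addr)[0] = 1;	/* touch it: the huge page is taken from the pool at fault time */
	munmap(addr, LENGTH);
	return 0;
}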
| /Documentation/devicetree/bindings/net/ |
| D | marvell-armada-370-neta.txt | 29 - bm,pool-long: ID of a pool that will accept all packets of a size 30 higher than 'short' pool's threshold (if set) and up to MTU value. 33 - bm,pool-short: ID of a pool that will be used for accepting 35 will use a single 'long' pool for all packets, as defined above. 48 bm,pool-long = <0>; 49 bm,pool-short = <1>;
|
| D | marvell-neta-bm.txt | 12 - pool<0 : 3>,capacity: size of external buffer pointers' ring maintained 13 in DRAM. Can be set for each pool (id 0 : 3) separately. The value has 17 - pool<0 : 3>,pkt-size: maximum size of a packet accepted by a given buffer 18 pointers' pool (id 0 : 3). It will be taken into consideration only when pool
|
| D | keystone-netcp.txt | 137 - rx-pool: specifies the number of descriptors to be used & the region-id 138 for creating the rx descriptor pool. 139 - tx-pool: specifies the number of descriptors to be used & the region-id 140 for creating the tx descriptor pool. 232 rx-pool = <1024 12>; 233 tx-pool = <1024 12>; 244 rx-pool = <1024 12>; 245 tx-pool = <1024 12>;
|
| /Documentation/admin-guide/cgroup-v1/ |
| D | rdma.rst | 41 resource accounting per cgroup, per device using a resource pool structure. 42 Each such resource pool is limited to 64 resources in a given resource pool 45 This resource pool object is linked to the cgroup css. Typically there 46 are 0 to 4 resource pool instances per cgroup, per device in most use cases. 67 A resource pool object is created in the following situations. 68 (a) User sets the limit and no previous resource pool exists for the device 75 A resource pool is destroyed if all the resource limits are set to max and 79 the resource pool for a particular device.
|
| /Documentation/devicetree/bindings/i2c/ |
| D | i2c-atr.yaml | 22 i2c-alias-pool: 25 I2C alias pool is a pool of I2C addresses on the main I2C bus that can be 28 remote peripheral is assigned an alias from the pool, and transactions to
|
| /Documentation/filesystems/nfs/ |
| D | knfsd-stats.rst | 31 for each NFS thread pool. 37 pool 38 The id number of the NFS thread pool to which this line applies. 41 Thread pool ids are a contiguous set of small integers starting 42 at zero. The maximum value depends on the thread pool mode, but 44 Note that in the default case there will be a single thread pool 46 and thus this file will have a single line with a pool id of "0". 74 pool for the NFS workload (the workload is thread-limited), in which
|
| /Documentation/networking/ |
| D | page_pool.rst | 4 Page Pool API 28 | Pool empty | Pool has entries 53 purpose of page pool, which is to allocate pages fast from cache without locking. 79 allocated from the page pool are already synced for the device. 90 For pages recycled on the XDP xmit and skb paths the page pool will 104 Unless the driver author really understands page pool internals 115 Older drivers expose page pool statistics via ethtool or debugfs. 132 /* Page pool registration */
|
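The page_pool excerpt covers the fast, lockless per-driver page cache and mentions a registration snippet at line 132. A hedged driver-side sketch of creating a pool, allocating a page, and recycling it; the ring size, DMA direction, and header path (<net/page_pool/helpers.h> on recent kernels; older trees use <net/page_pool.h>) are assumptions for illustration:

#include <linux/dma-direction.h>
#include <linux/err.h>
#include <linux/numa.h>
#include <net/page_pool/helpers.h>

static struct page_pool *example_pool_setup(struct device *dev)
{
	struct page_pool_params pp_params = {
		.order		= 0,			/* single pages */
		.pool_size	= 256,			/* recycling ring entries */
		.nid		= NUMA_NO_NODE,
		.dev		= dev,
		.dma_dir	= DMA_FROM_DEVICE,
		.flags		= PP_FLAG_DMA_MAP,	/* pool maps pages for DMA */
	};
	struct page_pool *pool = page_pool_create(&pp_params);

	if (!IS_ERR(pool)) {
		/* Fast path: grab a page, then hand it back for recycling */
		struct page *page = page_pool_dev_alloc_pages(pool);

		if (page)
			page_pool_put_full_page(pool, page, false);
	}
	return pool;
}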
| /Documentation/admin-guide/ |
| D | java.rst | 191 /* From Sun's Java VM Specification, as tag entries in the constant pool. */ 217 long *pool; 247 /* Reads in a value from the constant pool. */ 252 pool[*cur] = ftell(classfile); 314 pool = calloc(cp_count, sizeof(long)); 315 if(!pool) 316 error("%s: Out of memory for constant pool\n", program); 326 if(!pool[this_class] || pool[this_class] == -1) 328 if(fseek(classfile, pool[this_class] + 1, SEEK_SET)) 334 if(!pool[classinfo_ptr] || pool[classinfo_ptr] == -1) [all …]
|
| /Documentation/translations/zh_CN/core-api/ |
| D | workqueue.rst | 577 pool[00] ref= 1 nice= 0 idle/workers= 4/ 4 cpu= 0 578 pool[01] ref= 1 nice=-20 idle/workers= 2/ 2 cpu= 0 579 pool[02] ref= 1 nice= 0 idle/workers= 4/ 4 cpu= 1 580 pool[03] ref= 1 nice=-20 idle/workers= 2/ 2 cpu= 1 581 pool[04] ref= 1 nice= 0 idle/workers= 4/ 4 cpu= 2 582 pool[05] ref= 1 nice=-20 idle/workers= 2/ 2 cpu= 2 583 pool[06] ref= 1 nice= 0 idle/workers= 3/ 3 cpu= 3 584 pool[07] ref= 1 nice=-20 idle/workers= 2/ 2 cpu= 3 585 pool[08] ref=42 nice= 0 idle/workers= 6/ 6 cpus=0000000f 586 pool[09] ref=28 nice= 0 idle/workers= 3/ 3 cpus=00000003 [all …]
|
| /Documentation/ABI/testing/ |
| D | sysfs-bus-rbd | 8 Usage: <mon ip addr> <options> <pool name> <rbd image name> [<snap name>] 79 What: /sys/bus/rbd/devices/<dev-id>/pool 97 pool (RO) The name of the storage pool where this rbd 99 within its pool. 117 (RO) The unique identifier for the rbd image's pool. This is a 118 permanent attribute of the pool. A pool's id will never change.
|
| /Documentation/dev-tools/ |
| D | kfence.rst | 63 The KFENCE memory pool is of fixed size, and if the pool is exhausted, no 70 The total memory dedicated to the KFENCE memory pool can be computed as:: 75 dedicating 2 MiB to the KFENCE memory pool. 78 pool is using pages of size ``PAGE_SIZE``. This will result in additional page 251 SLUB) returns a guarded allocation from the KFENCE object pool (allocation 295 If pool utilization reaches 75% (default) or above, to reduce the risk of the 296 pool eventually being fully occupied by allocated objects yet ensure diverse 298 same source from further filling up the pool. The "source" of an allocation is 301 filling up the pool permanently, which is the most common risk for the pool 304 the boot parameter ``kfence.skip_covered_thresh`` (pool usage%).
|
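To read the kfence size formula above with concrete numbers, assuming the default CONFIG_KFENCE_NUM_OBJECTS of 255 and 4 KiB pages:

    ( 255 + 1 ) * 2 * 4096 bytes = 2097152 bytes = 2 MiB

which matches the 2 MiB figure quoted in the excerpt for the default pool.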
| /Documentation/mm/ |
| D | balance.rst | 25 mapped pages from the direct mapped pool, instead of falling back on 26 the dma pool, so as to keep the dma pool filled for dma requests (atomic 29 regular memory requests by allocating one from the dma pool, instead
|
| /Documentation/devicetree/bindings/media/ |
| D | nuvoton,npcm-vcd.yaml | 46 CMA pool to use for buffer allocation instead of the default CMA pool.
|
| D | allwinner,sun4i-a10-video-engine.yaml | 63 CMA pool to use for buffer allocation instead of the default 64 CMA pool.
|
| /Documentation/devicetree/bindings/sound/ |
| D | google,cros-ec-codec.yaml | 41 Shared memory region to EC. A "shared-dma-pool". 53 compatible = "shared-dma-pool";
|
| /Documentation/devicetree/bindings/soc/ti/ |
| D | keystone-navigator-qmss.txt | 15 queue pool management (allocation, push, pop and notify) and descriptor 16 pool management. 51 - qpend : pool of qpend(interruptible) queues 52 - general-purpose : pool of general queues, primarily used 55 - accumulator : pool of queues on PDSP accumulator channel
|