Searched full:queue (Results 1 – 25 of 268) sorted by relevance
| /Documentation/ABI/testing/ |
| D | sysfs-class-net-queues | 1 What: /sys/class/net/<iface>/queues/rx-<queue>/rps_cpus 8 network device queue. Possible values depend on the number 11 What: /sys/class/net/<iface>/queues/rx-<queue>/rps_flow_cnt 17 processed by this particular network device receive queue. 19 What: /sys/class/net/<iface>/queues/tx-<queue>/tx_timeout 25 network interface transmit queue. 27 What: /sys/class/net/<iface>/queues/tx-<queue>/tx_maxrate 32 A Mbps max-rate set for the queue, a value of zero means disabled, 35 What: /sys/class/net/<iface>/queues/tx-<queue>/xps_cpus 42 network device transmit queue. Possible values depend on the [all …]
|
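The RPS, flow-count, rate-cap and XPS attributes above are all writable per queue. A minimal shell sketch of tuning them (the interface name eth0 and the mask/rate values are illustrative, not taken from the results):

    # Steer RPS processing for rx queue 0 to CPUs 0-3 (mask f = 0b1111)
    echo f > /sys/class/net/eth0/queues/rx-0/rps_cpus
    # Size this queue's RPS flow table
    echo 4096 > /sys/class/net/eth0/queues/rx-0/rps_flow_cnt
    # Cap tx queue 0 at 1000 Mbps; writing 0 disables the cap
    echo 1000 > /sys/class/net/eth0/queues/tx-0/tx_maxrate
    # Restrict XPS for tx queue 0 to CPUs 0-3
    echo f > /sys/class/net/eth0/queues/tx-0/xps_cpus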
| D | sysfs-block | 101 What: /sys/block/<disk>/queue/logical_block_size 108 What: /sys/block/<disk>/queue/physical_block_size 120 What: /sys/block/<disk>/queue/minimum_io_size 134 What: /sys/block/<disk>/queue/optimal_io_size 147 What: /sys/block/<disk>/queue/nomerges 183 What: /sys/block/<disk>/queue/discard_granularity 196 What: /sys/block/<disk>/queue/discard_max_bytes 212 What: /sys/block/<disk>/queue/discard_zeroes_data 219 What: /sys/block/<disk>/queue/write_same_max_bytes 232 What: /sys/block/<disk>/queue/write_zeroes_max_bytes [all …]
|
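The block-queue attributes listed above describe the device's I/O topology and discard capability; a quick way to inspect them (sda is an illustrative device name):

    # I/O topology limits consumed by mkfs and partitioning tools
    cat /sys/block/sda/queue/logical_block_size
    cat /sys/block/sda/queue/physical_block_size
    cat /sys/block/sda/queue/minimum_io_size
    cat /sys/block/sda/queue/optimal_io_size
    # Discard capability; discard_max_bytes of 0 means discard is unsupported
    cat /sys/block/sda/queue/discard_granularity
    cat /sys/block/sda/queue/discard_max_bytes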
| /Documentation/networking/ |
| D | scaling.rst | 28 (multi-queue). On reception, a NIC can send different packets to different 32 queue, which in turn can be processed by separate CPUs. This mechanism is 35 Multi-queue distribution can also be used for traffic prioritization, but 42 stores a queue number. The receive queue for a packet is determined 49 can be directed to their own receive queue. Such “n-tuple” filters can 56 The driver for a multi-queue capable NIC typically provides a kernel 59 num_queues. A typical RSS configuration would be to have one receive queue 64 The indirection table of an RSS device, which resolves a queue by masked 76 Each receive queue has a separate IRQ associated with it. The NIC triggers 77 this to notify a CPU when new packets arrive on the given queue. The [all …]
|
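scaling.rst describes RSS indirection tables and "n-tuple" filters; both are usually driven from ethtool. A sketch, assuming a driver with RSS and ntuple support (interface name, queue count and filter are illustrative):

    # Show the current RSS indirection table and hash key
    ethtool -x eth0
    # Spread the hash evenly over the first 4 receive queues
    ethtool -X eth0 equal 4
    # n-tuple filter: direct TCP traffic for port 80 to rx queue 2
    ethtool -N eth0 flow-type tcp4 dst-port 80 action 2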
| D | multiqueue.txt | 23 netif_{start|stop|wake}_subqueue() functions to manage each queue while the 33 default pfifo_fast qdisc. This qdisc supports one qdisc per hardware queue. 37 the base driver to determine which queue to send the skb to. 40 blocking. It will cycle through the bands and verify that the hardware queue 45 will be queued to the band associated with the hardware queue. 61 band 0 => queue 0 62 band 1 => queue 1 63 band 2 => queue 2 64 band 3 => queue 3 66 Traffic will begin flowing through each queue based on either the simple_tx_hash [all …]
|
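The band-to-queue mapping above can be set up with the multiq qdisc, plus an skbedit action to pin a flow to a given band; a sketch assuming a multi-queue NIC (device name and port are illustrative):

    # Install multiq as the root qdisc: band N feeds hardware queue N
    tc qdisc add dev eth0 root handle 1: multiq
    # Pin TCP port 80 traffic to band/queue 3
    tc filter add dev eth0 parent 1: protocol ip prio 1 u32 \
        match ip dport 80 0xffff action skbedit queue_mapping 3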
| D | af_xdp.rst | 54 specific queue id on that device, and it is not until bind is 105 queue id of that netdev. It is created and configured (chunk size, 107 system call. A UMEM is bound to a netdev and queue id, via the bind() 129 with the UMEM must have an RX queue, TX queue or both. Say that there 213 not match the queue configuration and netdev, the frame will be 215 queue 17. Only the XDP program executing for eth0 and queue 17 will 299 up in queue 16, that we will enable AF_XDP on. Here, we use ethtool 320 allocates one Rx and Tx queue pair per core. So on an 8 core system, 321 queue ids 0 to 7 will be allocated, one per core. In the AF_XDP 323 specify a specific queue id to bind to and it is only the traffic [all …]
|
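The af_xdp.rst excerpt already leans on ethtool to land a flow on queue 16 before the AF_XDP socket binds there; the steering step looks roughly like this (interface and ports are illustrative):

    # Direct a UDP flow to queue 16, the queue id the AF_XDP socket binds to
    ethtool -N eth0 flow-type udp4 src-port 4242 dst-port 4242 action 16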
| D | hinic.txt | 14 HiNIC devices support an MSI-X interrupt vector for each Tx/Rx queue and 70 the Queue Pairs. The WQ is a Memory Block in a Page. The Block contains 71 pointers to Memory Areas that are the Memory for the Work Queue Elements (WQEs). 79 Queue Pairs (QPs) - The HW Receive and Send queues for Receiving and Transmitting 102 The Logical Tx queue is not dependent on the format of the HW Send Queue. 106 The Logical Rx queue is not dependent on the format of the HW Receive Queue.
|
| /Documentation/devicetree/bindings/soc/ti/ |
| D | keystone-navigator-qmss.txt | 1 * Texas Instruments Keystone Navigator Queue Management SubSystem driver 3 The QMSS (Queue Manager Sub System) found on Keystone SOCs is one of 5 multi-core Navigator. QMSS consists of queue managers, packed-data structure 8 The Queue Manager is a hardware module that is responsible for accelerating 15 queue pool management (allocation, push, pop and notify) and descriptor 23 - queue-range : <start number> total range of queue numbers for the device. 29 - qmgrs : child node describing the individual queue managers on the 32 -- managed-queues : the actual queues managed by each queue manager 33 instance, specified as <"base queue #" "# of queues">. 37 - Queue Peek region. [all …]
|
| /Documentation/devicetree/bindings/misc/ |
| D | intel,ixp4xx-ahb-queue-manager.yaml | 5 $id: "http://devicetree.org/schemas/misc/intel,ixp4xx-ahb-queue-manager.yaml#" 8 title: Intel IXP4xx AHB Queue Manager 14 The IXP4xx AHB Queue Manager maintains queues as circular buffers in 18 queues from the queue manager with foo-queue = <&qmgr N> where the 19 &qmgr is a phandle to the queue manager and N is the queue resource 20 number. The queue resources available and their specific purpose 26 - const: intel,ixp4xx-ahb-queue-manager 45 qmgr: queue-manager@60000000 { 46 compatible = "intel,ixp4xx-ahb-queue-manager";
|
| /Documentation/scsi/ |
| D | hptiop.txt | 22 0x40 Inbound Queue Port 23 0x44 Outbound Queue Port 37 0x40 Inbound Queue Port 38 0x44 Outbound Queue Port 49 0x0 Inbound Queue Head Pointer 50 0x4 Inbound Queue Tail Pointer 51 0x8 Outbound Queue Head Pointer 52 0xC Outbound Queue Tail Pointer 55 0x40-0x1040 Inbound Queue 56 0x1040-0x2040 Outbound Queue [all …]
|
| /Documentation/block/ |
| D | switching-sched.rst | 5 Each io queue has a set of io scheduler tunables associated with it. These 9 /sys/block/<device>/queue/iosched 22 echo SCHEDNAME > /sys/block/DEV/queue/scheduler 28 a "cat /sys/block/DEV/queue/scheduler" - the list of valid names 31 # cat /sys/block/sda/queue/scheduler 33 # echo none >/sys/block/sda/queue/scheduler 34 # cat /sys/block/sda/queue/scheduler
|
| D | null_blk.rst | 13 the request queue. The following instances are possible: 15 Multi-queue block-layer 25 All of them have a completion queue for each core in the system. 30 queue_mode=[0-2]: Default: 2-Multi-queue 35 1 Single-queue (deprecated) 36 2 Multi-queue 69 defaults to 1. For multi-queue, it is ignored when use_per_node_hctx module 73 The hardware queue depth of the device. 75 Multi-queue specific parameters 84 1 The multi-queue block layer is instantiated with a hardware dispatch [all …]
|
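null_blk's queue settings are plain module parameters; a sketch of loading it in multi-queue mode (the values are illustrative):

    # queue_mode=2 selects multi-queue; submit queue count and hardware
    # queue depth correspond to the parameters listed above
    modprobe null_blk queue_mode=2 submit_queues=4 hw_queue_depth=64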
| D | bfq-iosched.rst | 58 BFQ works for multi-queue devices too. 162 `(bfq_)queue`. 164 - BFQ grants exclusive access to the device, for a while, to one queue 166 associating every queue with a budget, measured in number of 169 - After a queue is granted access to the device, the budget of the 170 queue is decremented, on each request dispatch, by the size of the 173 - The in-service queue is expired, i.e., its service is suspended, 174 only if one of the following events occurs: 1) the queue finishes 175 its budget, 2) the queue empties, 3) a "budget timeout" fires. 181 - Actually, as in CFQ, a queue associated with a process issuing [all …]
|
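Since bfq-iosched.rst notes that BFQ works for multi-queue devices too, it is selected per device queue through the same sysfs knob shown in switching-sched.rst above (sda is illustrative):

    cat /sys/block/sda/queue/scheduler
    echo bfq > /sys/block/sda/queue/scheduler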
| /Documentation/admin-guide/device-mapper/ |
| D | dm-queue-length.rst | 2 dm-queue-length 5 dm-queue-length is a path selector module for device-mapper targets, 7 The path selector name is 'queue-length'. 30 dm-queue-length increments/decrements 'in-flight' when an I/O is 32 dm-queue-length selects a path with the minimum 'in-flight'. 41 # echo "0 10 multipath 0 0 1 1 queue-length 0 2 1 8:0 128 8:16 128" \ 45 test: 0 10 multipath 0 0 1 1 queue-length 0 2 1 8:0 128 8:16 128
|
| /Documentation/arm/keystone/ |
| D | knav-qmss.rst | 2 Texas Instruments Keystone Navigator Queue Management SubSystem driver 9 The QMSS (Queue Manager Sub System) found on Keystone SOCs is one of 11 multi-core Navigator. QMSS consists of queue managers, packed-data structure 14 The Queue Manager is a hardware module that is responsible for accelerating 21 queue pool management (allocation, push, pop and notify) and descriptor 34 queue or multiple contiguous queues. drivers/soc/ti/knav_qmss_acc.c is the 57 file system. The driver doesn't add acc queues to the supported queue range if 58 PDSP is not running in the SoC. The API call fails if there is a queue open 59 request to an acc queue and PDSP is not running. So make sure to copy the firmware 60 to the file system before using these queue types.
|
| /Documentation/media/uapi/v4l/ |
| D | dev-decoder.rst | 42 queue containing data that resulted from processing buffer A. 50 the destination buffer queue; for decoders, the queue of buffers containing 51 decoded frames; for encoders, the queue of buffers containing an encoded 110 the source buffer queue; for decoders, the queue of buffers containing 111 an encoded bytestream; for encoders, the queue of buffers containing raw 318 Alternatively, :c:func:`VIDIOC_CREATE_BUFS` on the ``OUTPUT`` queue can be 346 3. Start streaming on the ``OUTPUT`` queue via :c:func:`VIDIOC_STREAMON`. 350 ``OUTPUT`` queue via :c:func:`VIDIOC_QBUF` and :c:func:`VIDIOC_DQBUF`. The 352 required metadata to configure the ``CAPTURE`` queue are found. This is 367 queue will not return the real values for the stream until a [all …]
|
| /Documentation/networking/device_drivers/amazon/ |
| D | ena.txt | 11 through an Admin Queue. 21 processing by providing multiple Tx/Rx queue pairs (the maximum number 22 is advertised by the device via the Admin Queue), a dedicated MSI-X 23 interrupt vector per Tx/Rx queue pair, adaptive interrupt moderation, 36 Queue (LLQ), which saves several more microseconds. 66 - Admin Queue (AQ) and Admin Completion Queue (ACQ) 67 - Asynchronous Event Notification Queue (AENQ) 80 The following admin queue commands are supported: 81 - Create I/O submission queue 82 - Create I/O completion queue [all …]
|
| /Documentation/devicetree/bindings/powerpc/fsl/ |
| D | raideng.txt | 30 There must be a sub-node for each job queue present in RAID Engine 33 - compatible: Should contain "fsl,raideng-v1.0-job-queue" as the value 34 This identifies the job queue interface 35 - reg: offset and length of the register set for job queue 42 compatible = "fsl,raideng-v1.0-job-queue"; 49 This node must be a sub-node of job queue node 70 compatible = "fsl,raideng-v1.0-job-queue";
|
| /Documentation/networking/device_drivers/aquantia/ |
| D | atlantic.txt | 146 Queue[0] InPackets: 23567131 147 Queue[0] OutPackets: 20070028 148 Queue[0] InJumboPackets: 0 149 Queue[0] InLroPackets: 0 150 Queue[0] InErrors: 0 151 Queue[1] InPackets: 45428967 152 Queue[1] OutPackets: 11306178 153 Queue[1] InJumboPackets: 0 154 Queue[1] InLroPackets: 0 155 Queue[1] InErrors: 0 [all …]
|
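Per-queue counters like the atlantic ones above are normally dumped with ethtool's statistics command (interface name illustrative):

    ethtool -S eth0 | grep -i queue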
| /Documentation/devicetree/bindings/mailbox/ |
| D | ti,message-manager.txt | 15 - reg-names queue_proxy_region - Map the queue proxy region. 16 queue_state_debug_region - Map the queue state debug 19 - #mbox-cells Shall be 2. Contains the queue ID and proxy ID in that 45 # RX queue ID is 5, proxy ID is 2 46 # TX queue ID is 0, proxy ID is 0
|
| /Documentation/devicetree/bindings/net/ |
| D | keystone-netcp.txt | 93 - tx-queue: the navigator queue number associated with the tx dma channel. 136 - rx-queue: the navigator queue number associated with rx dma channel. 141 - rx-queue-depth: number of descriptors in each of the free descriptor 142 queue (FDQ) for the pktdma Rx flow. There can be at 145 - tx-completion-queue: the navigator queue number where the descriptors are 185 tx-queue = <648>; 234 rx-queue-depth = <128 128 0 0>; 236 rx-queue = <8704>; 237 tx-completion-queue = <8706>; 246 rx-queue-depth = <128 128 0 0>; [all …]
|
| /Documentation/devicetree/bindings/scsi/ |
| D | hisilicon-sas.txt | 19 - queue-count : number of delivery and completion queues in the controller 24 - Completion queue interrupts 32 Completion queue interrupts : each completion queue has 1 42 - Completion queue interrupts 49 Completion queue interrupts : each completion queue has 1 73 queue-count = <32>;
|
| /Documentation/filesystems/ |
| D | inotify.txt | 37 item to block on, which is mapped to a single queue of events. The single 43 which happened first. A single queue trivially gives you ordering. Such 49 queue is the data structure that makes sense. 64 juggle more than one queue and thus more than one associated fd. There 65 need not be a one-fd-per-process mapping; it is one-fd-per-queue and a 66 process can easily want more than one queue.
|
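inotify's one-fd-per-queue model can be observed from userspace with inotifywait from the inotify-tools package (hypothetical paths; one process, one queue, several watches):

    # One fd/queue delivering ordered events for both directories
    inotifywait -m /tmp/a /tmp/b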
| /Documentation/devicetree/bindings/input/touchscreen/ |
| D | fsl-mx25-tcq.txt | 1 Freescale mx25 TS conversion queue module 3 mx25 touchscreen conversion queue module which controls the ADC unit of the 25 The first queue is for the touchscreen, the second for general purpose ADC.
|
| /Documentation/devicetree/bindings/dma/ |
| D | fsl-qdma.txt | 26 - status-sizes: status queue size of per virtual block 27 - queue-sizes: command queue size of per virtual block, the size number 54 queue-sizes = <64 64>;
|
| /Documentation/virt/kvm/devices/ |
| D | xive.txt | 118 Configures an event queue of a CPU 133 - flags: queue flags 137 - qshift: queue size (power of 2) 138 - qaddr: real address of queue 139 - qtoggle: current queue toggle bit 140 - qindex: current queue index 146 -EINVAL: Invalid queue size 147 -EINVAL: Invalid queue address
|