Searched full:queue (Results 1 – 25 of 352) sorted by relevance
/Documentation/devicetree/bindings/net/

intel,ixp4xx-hss.yaml
    15: Processing Engine) and the IXP4xx Queue Manager to process
    35: intel,queue-chl-rxtrig:
    39: - description: phandle to the RX trigger queue on the NPE
    40: - description: the queue instance number
    41: description: phandle to the RX trigger queue on the NPE
    43: intel,queue-chl-txready:
    47: - description: phandle to the TX ready queue on the NPE
    48: - description: the queue instance number
    49: description: phandle to the TX ready queue on the NPE
    51: intel,queue-pkt-rx:
    [all …]
|
intel,ixp4xx-ethernet.yaml
    18: Processing Engine) and the IXP4xx Queue Manager to process
    30: queue-rx:
    34: - description: phandle to the RX queue node
    35: - description: RX queue instance to use
    36: description: phandle to the RX queue on the NPE
    38: queue-txready:
    42: - description: phandle to the TX READY queue node
    43: - description: TX READY queue instance to use
    44: description: phandle to the TX READY queue on the NPE
    67: - queue-rx
    [all …]
|
/Documentation/ABI/testing/

sysfs-class-net-queues
    1: What: /sys/class/net/<iface>/queues/rx-<queue>/rps_cpus
    8: network device queue. Possible values depend on the number
    11: What: /sys/class/net/<iface>/queues/rx-<queue>/rps_flow_cnt
    17: processed by this particular network device receive queue.
    19: What: /sys/class/net/<iface>/queues/tx-<queue>/tx_timeout
    25: network interface transmit queue.
    27: What: /sys/class/net/<iface>/queues/tx-<queue>/tx_maxrate
    32: A Mbps max-rate set for the queue, a value of zero means disabled,
    35: What: /sys/class/net/<iface>/queues/tx-<queue>/xps_cpus
    42: network device transmit queue. Possible values depend on the
    [all …]
|
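The sysfs-class-net-queues hit above documents per-queue attributes such as rps_cpus and xps_cpus, which take comma-separated hexadecimal CPU bitmaps. A minimal sketch of decoding that format into CPU ids (the helper name is mine, not from the ABI document):

```python
def parse_cpumask(mask: str) -> list[int]:
    """Expand a sysfs-style hex CPU mask, e.g. '00000000,0000000c', into CPU ids."""
    # sysfs prints cpumasks as comma-separated 32-bit hex words; joining the
    # words and parsing as one hex number recovers the full bitmap.
    value = int(mask.replace(",", ""), 16)
    return [cpu for cpu in range(value.bit_length()) if value >> cpu & 1]

# '0000000c' is binary 1100, i.e. CPUs 2 and 3.
print(parse_cpumask("00000000,0000000c"))  # -> [2, 3]
```

Writing such a mask back (e.g. `echo c > .../rps_cpus`) is the reverse operation; an empty mask of `0` disables steering for that queue.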
/Documentation/virt/gunyah/

message-queue.rst
    5: Message queue is a simple low-capacity IPC channel between two virtual machines.
    7: message queue is unidirectional and buffered in the hypervisor. A full-duplex
    10: The size of the queue and the maximum size of the message that can be passed is
    11: fixed at creation of the message queue. Resource manager is presently the only
    14: further protocol on top of the message queue messages themselves. For instance,
    18: The diagram below shows how message queue works. A typical configuration
    19: involves 2 message queues. Message queue 1 allows VM_A to send messages to VM_B.
    20: Message queue 2 allows VM_B to send messages to VM_A.
    24: message queue 1's queue. The hypervisor copies memory into the internal
    25: message queue buffer; the memory doesn't need to be shared between
    [all …]
|
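The message-queue.rst hit above describes unidirectional, bounded, hypervisor-buffered queues that are paired to form a full-duplex channel. A toy Python model of that behavior (the class name, depth, and message-size values are mine, not Gunyah's; the real queues live in the hypervisor, not guest memory):

```python
from collections import deque

class MessageQueue:
    """Toy model of one unidirectional, bounded message queue."""
    def __init__(self, depth: int, max_msg_size: int):
        # Both limits are fixed at creation time, as the document states.
        self.depth = depth
        self.max_msg_size = max_msg_size
        self.buf = deque()

    def send(self, msg: bytes) -> bool:
        # The hypervisor copies the message into its own buffer, so the
        # sender's memory never needs to be shared with the receiver.
        if len(msg) > self.max_msg_size or len(self.buf) == self.depth:
            return False  # full or oversized: the sender must retry later
        self.buf.append(bytes(msg))
        return True

    def receive(self):
        return self.buf.popleft() if self.buf else None

# A full-duplex channel pairs two unidirectional queues.
a_to_b = MessageQueue(depth=8, max_msg_size=240)
b_to_a = MessageQueue(depth=8, max_msg_size=240)
```

In this sketch a failed `send` stands in for the hypervisor reporting a full queue; a real client would wait for a "queue has space" notification before retrying.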
/Documentation/devicetree/bindings/soc/ti/

keystone-navigator-qmss.txt
    1: * Texas Instruments Keystone Navigator Queue Management SubSystem driver
    3: The QMSS (Queue Manager Sub System) found on Keystone SOCs is one of
    5: multi-core Navigator. QMSS consist of queue managers, packed-data structure
    8: The Queue Manager is a hardware module that is responsible for accelerating
    15: queue pool management (allocation, push, pop and notify) and descriptor
    23: - queue-range : <start number> total range of queue numbers for the device.
    29: - qmgrs : child node describing the individual queue managers on the
    32: -- managed-queues : the actual queues managed by each queue manager
    33: instance, specified as <"base queue #" "# of queues">.
    37: - Queue Peek region.
    [all …]
|
/Documentation/devicetree/bindings/crypto/

intel,ixp4xx-crypto.yaml
    32: queue-rx:
    36: - description: phandle to the RX queue on the NPE
    37: - description: the queue instance number
    38: description: phandle to the RX queue on the NPE, the cell describing
    39: the queue instance to be used.
    41: queue-txready:
    45: - description: phandle to the TX READY queue on the NPE
    46: - description: the queue instance number
    47: description: phandle to the TX READY queue on the NPE, the cell describing
    48: the queue instance to be used.
    [all …]
|
/Documentation/networking/

scaling.rst
    28: (multi-queue). On reception, a NIC can send different packets to different
    32: queue, which in turn can be processed by separate CPUs. This mechanism is
    35: Multi-queue distribution can also be used for traffic prioritization, but
    42: stores a queue number. The receive queue for a packet is determined
    51: both directions of the flow to land on the same Rx queue (and CPU). The
    64: can be directed to their own receive queue. Such “n-tuple” filters can
    71: The driver for a multi-queue capable NIC typically provides a kernel
    74: num_queues. A typical RSS configuration would be to have one receive queue
    79: The indirection table of an RSS device, which resolves a queue by masked
    91: Each receive queue has a separate IRQ associated with it. The NIC triggers
    [all …]
|
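The scaling.rst hit above describes RSS resolving a receive queue from the masked low-order bits of a flow hash via an indirection table whose entries store queue numbers. A toy Python model of that lookup (the function name and table values are mine; real NICs do this in hardware):

```python
def rss_select_queue(flow_hash: int, indirection_table: list[int]) -> int:
    """Mask the flow hash down to a table index and return the queue number."""
    # The table length is a power of two, so masking equals modulo.
    index = flow_hash & (len(indirection_table) - 1)
    return indirection_table[index]

# A typical configuration spreads queues evenly across the table,
# e.g. 4 receive queues over a 128-entry indirection table.
table = [entry % 4 for entry in range(128)]
print(rss_select_queue(0x9E3779B9, table))  # -> 1
```

Because every packet of a flow produces the same hash, all its packets land on the same queue (and therefore the same CPU); rewriting table entries via `ethtool -X` re-weights the spread without touching the hash.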
tc-queue-filters.rst
    4: TC queue based filtering
    8: to a single queue on both the transmit and receive side.
    15: the queue-sets are configured using mqprio.
    17: 2) TC filter directs traffic to a transmit queue with the action
    19: for transmit queue is executed in software only and cannot be
    23: queues and/or a single queue are supported as below:
    31: receive queue. The action skbedit queue_mapping for receive queue
    33: the hardware for queue selection. In such case, the hardware
    35: devices, TC filter directing traffic to a queue have higher
    36: priority over flow director filter assigning a queue. The hash
|
multiqueue.rst
    23: netif_{start|stop|wake}_subqueue() functions to manage each queue while the
    32: default pfifo_fast qdisc. This qdisc supports one qdisc per hardware queue.
    36: the base driver to determine which queue to send the skb to.
    39: blocking. It will cycle though the bands and verify that the hardware queue
    44: will be queued to the band associated with the hardware queue.
    60: band 0 => queue 0
    61: band 1 => queue 1
    62: band 2 => queue 2
    63: band 3 => queue 3
    65: Traffic will begin flowing through each queue based on either the simple_tx_hash
    [all …]
|
/Documentation/devicetree/bindings/misc/

intel,ixp4xx-ahb-queue-manager.yaml
    5: $id: http://devicetree.org/schemas/misc/intel,ixp4xx-ahb-queue-manager.yaml#
    8: title: Intel IXP4xx AHB Queue Manager
    14: The IXP4xx AHB Queue Manager maintains queues as circular buffers in
    18: queues from the queue manager with foo-queue = <&qmgr N> where the
    19: &qmgr is a phandle to the queue manager and N is the queue resource
    20: number. The queue resources available and their specific purpose
    26: - const: intel,ixp4xx-ahb-queue-manager
    47: qmgr: queue-manager@60000000 {
    48: compatible = "intel,ixp4xx-ahb-queue-manager";
|
/Documentation/block/

blk-mq.rst
    4: Multi-Queue Block IO Queueing Mechanism (blk-mq)
    7: The Multi-Queue Block IO Queueing Mechanism is an API to enable fast storage
    30: in those devices' design, the multi-queue mechanism was introduced.
    32: The former design had a single queue to store block IO requests with a single
    51: path possible: send it directly to the hardware queue. However, there are two
    54: sent to the software queue.
    57: at the hardware queue, a second stage queue where the hardware has direct access
    60: queue, to be sent in the future, when the hardware is able.
    70: be used to communicate with the device driver. Each queue has its own lock and
    73: The staging queue can be used to merge requests for adjacent sectors. For
    [all …]
|
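The blk-mq.rst hit above describes a two-stage design: requests go straight to a fixed-depth hardware dispatch queue when possible, and fall back to a software staging queue when the hardware is busy. A toy Python model of that fast-path/slow-path split (all class names are mine; this ignores request merging, tagging, and locking entirely):

```python
from collections import deque

class HWQueue:
    """Toy hardware dispatch queue with a fixed number of slots."""
    def __init__(self, depth: int):
        self.depth = depth
        self.slots = deque()

    def try_dispatch(self, req) -> bool:
        # Succeeds only while the hardware has a free slot.
        if len(self.slots) < self.depth:
            self.slots.append(req)
            return True
        return False

class SWQueue:
    """Toy software staging queue feeding one hardware queue."""
    def __init__(self, hw: HWQueue):
        self.hw = hw
        self.staged = deque()

    def submit(self, req):
        # Fast path: straight to hardware if nothing is already staged.
        if not self.staged and self.hw.try_dispatch(req):
            return
        # Slow path: stage the request to be sent later.
        self.staged.append(req)

    def run(self):
        # Called when completions free hardware slots: drain in order.
        while self.staged and self.hw.try_dispatch(self.staged[0]):
            self.staged.popleft()
```

Keeping submission order in the staging queue is what makes the later `run()` drain look like the original stream; blk-mq additionally uses the staging stage to merge requests for adjacent sectors, which this sketch omits.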
switching-sched.rst
    5: Each io queue has a set of io scheduler tunables associated with it. These
    9: /sys/block/<device>/queue/iosched
    22: echo SCHEDNAME > /sys/block/DEV/queue/scheduler
    28: a "cat /sys/block/DEV/queue/scheduler" - the list of valid names
    31: # cat /sys/block/sda/queue/scheduler
    33: # echo none >/sys/block/sda/queue/scheduler
    34: # cat /sys/block/sda/queue/scheduler
|
/Documentation/netlink/specs/

netdev.yaml
    74: name: queue-type
    80: entries: [ queue ]
    252: name: queue
    256: doc: Queue index; most queue types are indexed like a C array, with
    257: indexes starting at 0 and ending at queue count - 1. Queue indexes
    258: are scoped to an interface and queue type.
    262: doc: ifindex of the netdevice to which the queue belongs.
    268: doc: Queue type as rx, tx. Each queue type defines a separate ID space.
    270: enum: queue-type
    273: doc: ID of the NAPI instance which services this queue.
    [all …]
|
/Documentation/admin-guide/device-mapper/

dm-queue-length.rst
    2: dm-queue-length
    5: dm-queue-length is a path selector module for device-mapper targets,
    7: The path selector name is 'queue-length'.
    30: dm-queue-length increments/decrements 'in-flight' when an I/O is
    32: dm-queue-length selects a path with the minimum 'in-flight'.
    41: # echo "0 10 multipath 0 0 1 1 queue-length 0 2 1 8:0 128 8:16 128" \
    45: test: 0 10 multipath 0 0 1 1 queue-length 0 2 1 8:0 128 8:16 128
|
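The dm-queue-length hit above says the selector tracks an 'in-flight' counter per path and picks the path where it is smallest. A short Python sketch of that policy (the helper name and sample table are mine; the real module also weights paths by the repeat_count seen in the dmsetup table):

```python
def select_path(in_flight: dict) -> str:
    """Pick the path with the fewest outstanding I/Os.

    Ties go to the first path in table order, since Python dicts
    preserve insertion order.
    """
    return min(in_flight, key=in_flight.get)

# Two paths, named after the device numbers in the example table.
print(select_path({"8:0": 3, "8:16": 1}))  # -> 8:16
```

The counter is incremented when an I/O is dispatched on a path and decremented on completion, so a slow path naturally accumulates in-flight I/Os and stops being selected.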
/Documentation/arch/arm/keystone/

knav-qmss.rst
    2: Texas Instruments Keystone Navigator Queue Management SubSystem driver
    9: The QMSS (Queue Manager Sub System) found on Keystone SOCs is one of
    11: multi-core Navigator. QMSS consist of queue managers, packed-data structure
    14: The Queue Manager is a hardware module that is responsible for accelerating
    21: queue pool management (allocation, push, pop and notify) and descriptor
    34: queue or multiple contiguous queues. drivers/soc/ti/knav_qmss_acc.c is the
    57: file system. The driver doesn't acc queues to the supported queue range if
    58: PDSP is not running in the SoC. The API call fails if there is a queue open
    59: request to an acc queue and PDSP is not running. So make sure to copy firmware
    60: to file system before using these queue types.
|
/Documentation/ABI/stable/

sysfs-block
    36: limited by some other queue limits, such as max_segments.
    96: than the number of requests queued in the block device queue.
    102: This is related to /sys/block/<disk>/queue/nr_requests
    200: What: /sys/block/<disk>/queue/add_random
    208: What: /sys/block/<disk>/queue/chunk_sectors
    221: What: /sys/block/<disk>/queue/crypto/
    225: The presence of this subdirectory of /sys/block/<disk>/queue/
    232: What: /sys/block/<disk>/queue/crypto/max_dun_bits
    240: What: /sys/block/<disk>/queue/crypto/modes/<mode>
    258: /sys/block/<disk>/queue/crypto/modes/AES-256-XTS will exist and
    [all …]
|
/Documentation/userspace-api/media/v4l/

dev-decoder.rst
    42: queue containing data that resulted from processing buffer A.
    50: the destination buffer queue; for decoders, the queue of buffers containing
    51: decoded frames; for encoders, the queue of buffers containing an encoded
    117: the source buffer queue; for decoders, the queue of buffers containing
    118: an encoded bytestream; for encoders, the queue of buffers containing raw
    325: Alternatively, :c:func:`VIDIOC_CREATE_BUFS` on the ``OUTPUT`` queue can be
    353: 3. Start streaming on the ``OUTPUT`` queue via :c:func:`VIDIOC_STREAMON`.
    357: ``OUTPUT`` queue via :c:func:`VIDIOC_QBUF` and :c:func:`VIDIOC_DQBUF`. The
    359: required metadata to configure the ``CAPTURE`` queue are found. This is
    374: queue will not return the real values for the stream until a
    [all …]
|
dev-stateless-decoder.rst
    26: ``OUTPUT`` queue when :c:func:`VIDIOC_REQBUFS` or :c:func:`VIDIOC_CREATE_BUFS`
    33: ``OUTPUT`` queue.
    39: calls :c:func:`VIDIOC_ENUM_FMT` on the ``OUTPUT`` queue.
    42: irrespective of the format currently set on the ``CAPTURE`` queue.
    49: :c:func:`VIDIOC_ENUM_FMT` on the ``CAPTURE`` queue.
    52: active on the ``OUTPUT`` queue.
    57: before querying the ``CAPTURE`` queue. Failure to do so will result in the
    72: 1. Set the coded format on the ``OUTPUT`` queue via :c:func:`VIDIOC_S_FMT`.
    99: 3. Call :c:func:`VIDIOC_G_FMT` for ``CAPTURE`` queue to get the format for the
    130: the ``CAPTURE`` queue. The client may use this ioctl to discover which
    [all …]
|
dev-encoder.rst
    42: queue containing data that resulted from processing buffer A.
    146: 1. Set the coded format on the ``CAPTURE`` queue via :c:func:`VIDIOC_S_FMT`.
    196: the ``CAPTURE`` queue.
    201: 3. Set the raw source format on the ``OUTPUT`` queue via
    232: 4. Set the raw frame interval on the ``OUTPUT`` queue via
    234: ``CAPTURE`` queue to the same value.
    259: ``OUTPUT`` queue is just a hint, the application may provide raw
    278: queue. Ideally these would be independent settings, but that would
    281: 5. **Optional** Set the coded frame interval on the ``CAPTURE`` queue via
    308: ``CAPTURE`` queue, that depends on how fast the encoder is and how
    [all …]
|
/Documentation/devicetree/bindings/powerpc/fsl/

raideng.txt
    30: There must be a sub-node for each job queue present in RAID Engine
    33: - compatible: Should contain "fsl,raideng-v1.0-job-queue" as the value
    34: This identifies the job queue interface
    35: - reg: offset and length of the register set for job queue
    42: compatible = "fsl,raideng-v1.0-job-queue";
    49: This node must be a sub-node of job queue node
    70: compatible = "fsl,raideng-v1.0-job-queue";
|
/Documentation/devicetree/bindings/firmware/

intel,ixp4xx-network-processing-engine.yaml
    74: intel,queue-chl-rxtrig = <&qmgr 12>;
    75: intel,queue-chl-txready = <&qmgr 34>;
    76: intel,queue-pkt-rx = <&qmgr 13>;
    77: intel,queue-pkt-tx = <&qmgr 14>, <&qmgr 15>, <&qmgr 16>, <&qmgr 17>;
    78: intel,queue-pkt-rxfree = <&qmgr 18>, <&qmgr 19>, <&qmgr 20>, <&qmgr 21>;
    79: intel,queue-pkt-txdone = <&qmgr 22>;
    90: queue-rx = <&qmgr 30>;
    91: queue-txready = <&qmgr 29>;
|
/Documentation/bpf/

map_queue_stack.rst
    38: An element ``value`` can be added to a queue or stack using the
    41: when the queue or stack is full, the oldest element will be removed to
    52: This helper fetches an element ``value`` from a queue or stack without
    63: This helper removes an element into ``value`` from a queue or
    77: A userspace program can push ``value`` onto a queue or stack using libbpf's
    90: A userspace program can peek at the ``value`` at the head of a queue or stack
    102: A userspace program can pop a ``value`` from the head of a queue or stack using
    113: This snippet shows how to declare a queue in a BPF program:
    121: } queue SEC(".maps");
    127: This snippet shows how to use libbpf's low-level API to create a queue from
|
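The map_queue_stack.rst hits above describe FIFO semantics for a BPF queue map, including the flag-controlled behavior where pushing into a full map evicts the oldest element instead of failing. A toy Python model of those push/peek/pop semantics (the class name is mine; this models the documented behavior, it is not the kernel implementation):

```python
from collections import deque

class QueueMap:
    """Toy model of a FIFO BPF queue map with a fixed max_entries."""
    def __init__(self, max_entries: int):
        self.max_entries = max_entries
        self.buf = deque()

    def push(self, value, overwrite: bool = False) -> bool:
        # `overwrite` models the BPF_EXIST flag: when the map is full,
        # the oldest element is removed to make room for the new one.
        if len(self.buf) == self.max_entries:
            if not overwrite:
                return False
            self.buf.popleft()
        self.buf.append(value)
        return True

    def peek(self):
        # Fetch the head element without removing it.
        return self.buf[0] if self.buf else None

    def pop(self):
        # Remove and return the head element.
        return self.buf.popleft() if self.buf else None
```

A stack map differs only in where `pop`/`peek` read from (the most recently pushed end); swapping `popleft()` for `pop()` in the accessors would model that variant.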
/Documentation/scsi/

hptiop.rst
    32: 0x40 Inbound Queue Port
    33: 0x44 Outbound Queue Port
    50: 0x40 Inbound Queue Port
    51: 0x44 Outbound Queue Port
    68: 0x0 Inbound Queue Head Pointer
    69: 0x4 Inbound Queue Tail Pointer
    70: 0x8 Outbound Queue Head Pointer
    71: 0xC Outbound Queue Tail Pointer
    74: 0x40-0x1040 Inbound Queue
    75: 0x1040-0x2040 Outbound Queue
    [all …]
|
/Documentation/networking/device_drivers/ethernet/amazon/

ena.rst
    15: through an Admin Queue.
    25: processing by providing multiple Tx/Rx queue pairs (the maximum number
    26: is advertised by the device via the Admin Queue), a dedicated MSI-X
    27: interrupt vector per Tx/Rx queue pair, adaptive interrupt moderation,
    40: Queue (LLQ), which saves several more microseconds.
    68: - Admin Queue (AQ) and Admin Completion Queue (ACQ)
    69: - Asynchronous Event Notification Queue (AENQ)
    82: The following admin queue commands are supported:
    84: - Create I/O submission queue
    85: - Create I/O completion queue
    [all …]
|
/Documentation/networking/device_drivers/ethernet/aquantia/

atlantic.rst
    169: Queue[0] InPackets: 23567131
    170: Queue[0] OutPackets: 20070028
    171: Queue[0] InJumboPackets: 0
    172: Queue[0] InLroPackets: 0
    173: Queue[0] InErrors: 0
    174: Queue[1] InPackets: 45428967
    175: Queue[1] OutPackets: 11306178
    176: Queue[1] InJumboPackets: 0
    177: Queue[1] InLroPackets: 0
    178: Queue[1] InErrors: 0
    [all …]
|