Lines Matching +full:multi +full:- +full:system
9 multi-processor systems.
24 (multi-queue). On reception, a NIC can send different packets to different
29 generally known as “Receive-side Scaling” (RSS). The goal of RSS and
31 Multi-queue distribution can also be used for traffic prioritization, but
35 and/or transport layer headers -- for example, a 4-tuple hash over
37 implementation of RSS uses a 128-entry indirection table where each entry
45 can be directed to their own receive queue. Such “n-tuple” filters can
46 be configured from ethtool (--config-ntuple).
50 The driver for a multi-queue capable NIC typically provides a kernel
62 commands (--show-rxfh-indir and --set-rxfh-indir). Modifying the
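A minimal userspace sketch of the lookup described above, assuming a hypothetical 8-queue NIC and a made-up hash value: the NIC reduces the packet's flow hash modulo the 128-entry indirection table (the same table that ethtool --set-rxfh-indir rewrites), and the selected entry names the receive queue.

    /* Illustrative model, not driver code: map a flow hash to an rx
     * queue through a 128-entry RSS indirection table. */
    #include <stdio.h>

    #define INDIR_SIZE 128          /* common hardware table size */

    int main(void)
    {
        unsigned char indir[INDIR_SIZE];
        unsigned int nqueues = 8;   /* hypothetical NIC queue count */

        /* Default layout: spread table entries evenly across queues. */
        for (int i = 0; i < INDIR_SIZE; i++)
            indir[i] = i % nqueues;

        unsigned int rss_hash = 0xdeadbeef; /* stand-in for the NIC hash */
        printf("hash 0x%08x -> rx queue %u\n",
               rss_hash, (unsigned)indir[rss_hash % INDIR_SIZE]);
        return 0;
    }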
70 signaling path for PCIe devices uses message signaled interrupts (MSI-X),
73 an IRQ may be handled on any CPU. Because a non-negligible part of packet
76 affinity of each interrupt see Documentation/IRQ-affinity.txt. Some systems
85 is to allocate as many queues as there are CPUs in the system (or the
86 NIC maximum, if lower). The most efficient high-rate configuration
92 Per-cpu load can be observed using the mpstat utility, but note that on
96 in the system.
111 introduce inter-processor interrupts (IPIs)).
119 flow hash over the packet’s addresses or ports (2-tuple or 4-tuple hash
125 skb->hash and can be used elsewhere in the stack as a hash of the
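The kernel computes this hash in software with a Jenkins hash; the sketch below substitutes a toy mixing function purely to show the shape of the computation, with made-up addresses and a hypothetical list of CPUs enabled in rps_cpus.

    /* Sketch only: a 4-tuple flow hash (conceptually what lands in
     * skb->hash) selecting a CPU from the RPS CPU set.  The mixing
     * below is a toy stand-in for the kernel's Jenkins hash. */
    #include <stdio.h>
    #include <stdint.h>

    static uint32_t toy_hash_4tuple(uint32_t saddr, uint32_t daddr,
                                    uint16_t sport, uint16_t dport)
    {
        uint32_t h = saddr ^ daddr ^ ((uint32_t)sport << 16 | dport);
        h ^= h >> 16;               /* crude avalanche, illustration only */
        h *= 0x45d9f3bu;
        h ^= h >> 16;
        return h;
    }

    int main(void)
    {
        int rps_cpus[] = { 0, 1, 2, 3 };  /* CPUs enabled in rps_cpus */
        uint32_t h = toy_hash_4tuple(0x0a000001, 0x0a000002, 12345, 80);

        printf("flow hash 0x%08x -> CPU %d\n",
               (unsigned int)h, rps_cpus[h % 4]);
        return 0;
    }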
145 /sys/class/net/<dev>/queues/rx-<n>/rps_cpus
149 CPU. Documentation/IRQ-affinity.txt explains how CPUs are assigned to
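A minimal sketch of turning RPS on for one queue, assuming a hypothetical device "eth0" and a mask covering CPUs 0-3; the path is the rps_cpus file quoted above, and the write needs root.

    /* Enable RPS on rx queue 0 of a hypothetical eth0: write a hex
     * CPU bitmap ("f" = CPUs 0-3) to rps_cpus.  Equivalent to:
     *   echo f > /sys/class/net/eth0/queues/rx-0/rps_cpus */
    #include <stdio.h>

    int main(void)
    {
        const char *path = "/sys/class/net/eth0/queues/rx-0/rps_cpus";
        FILE *f = fopen(path, "w");

        if (!f) {
            perror(path);        /* needs root and an existing eth0 */
            return 1;
        }
        fprintf(f, "f\n");       /* bitmap: CPUs 0,1,2,3 */
        return fclose(f) ? 1 : 0;
    }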
157 the system. At high interrupt rate, it might be wise to exclude the
160 For a multi-queue system, if RSS is configured so that a hardware
169 reordering. The trade-off to sending all packets from the same flow
181 net.core.netdev_max_backlog), the kernel starts a per-flow packet
200 Per-flow rate is calculated by hashing each packet into a hashtable
201 bucket and incrementing a per-bucket counter. The hash function is
203 be much larger than the number of CPUs, flow limit has finer-grained
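A hedged model of that bookkeeping: every packet increments its bucket, and once the input queue is more than half full, a flow whose bucket count dominates the backlog becomes a drop candidate. Sizes and thresholds here are illustrative, not the kernel's exact arithmetic.

    /* Toy model of flow-limit accounting: hash each packet into a
     * bucket; police large flows only when the backlog is stressed. */
    #include <stdio.h>
    #include <stdint.h>

    #define NUM_BUCKETS 4096     /* "much larger than the number of CPUs" */

    static unsigned int bucket[NUM_BUCKETS];

    static int flow_limit_drop(uint32_t skb_hash, unsigned int qlen,
                               unsigned int max_backlog)
    {
        unsigned int b = skb_hash % NUM_BUCKETS;

        bucket[b]++;
        /* Only drop when the queue is over half full AND this flow
         * accounts for an outsized share of it. */
        return qlen > max_backlog / 2 && bucket[b] > max_backlog / 2;
    }

    int main(void)
    {
        for (int i = 0; i < 2000; i++)          /* one elephant flow */
            flow_limit_drop(0xabcd1234, 900, 1000);
        printf("elephant dropped: %d\n",
               flow_limit_drop(0xabcd1234, 900, 1000));
        printf("mouse dropped:    %d\n",
               flow_limit_drop(0x00000042, 900, 1000));
        return 0;
    }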
283 - The current CPU's queue head counter >= the recorded tail counter
285 - The current CPU is unset (>= nr_cpu_ids)
286 - The current CPU is offline
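The three conditions above gate when a flow may be re-steered to the CPU recorded in the socket flow table; a sketch of that decision follows, with field names mirroring the description rather than the kernel's actual structures.

    /* Sketch of the RFS re-steer rule: move a flow to the desired CPU
     * only when no packets from it can still be queued on the old CPU
     * (so reordering is impossible). */
    #include <stdbool.h>
    #include <stdio.h>

    #define NR_CPU_IDS 8                 /* hypothetical */

    struct dev_flow {
        unsigned int cpu;                /* CPU flow was last steered to */
        unsigned int last_qtail;         /* backlog tail at last enqueue */
    };

    /* How far each CPU's backlog has drained (head counter). */
    static unsigned int input_queue_head[NR_CPU_IDS];

    static bool cpu_is_online(unsigned int cpu) { return cpu < NR_CPU_IDS; }

    static unsigned int rfs_select_cpu(struct dev_flow *f,
                                       unsigned int desired_cpu)
    {
        if (f->cpu >= NR_CPU_IDS ||                    /* unset */
            !cpu_is_online(f->cpu) ||                  /* offline */
            input_queue_head[f->cpu] >= f->last_qtail) /* queue drained */
            f->cpu = desired_cpu;                      /* safe to move */
        return f->cpu;
    }

    int main(void)
    {
        struct dev_flow f = { .cpu = 2, .last_qtail = 10 };

        input_queue_head[2] = 5;    /* old packets still queued: stay */
        printf("stay on CPU %u\n", rfs_select_cpu(&f, 6));
        input_queue_head[2] = 10;   /* drained: safe to move */
        printf("move to CPU %u\n", rfs_select_cpu(&f, 6));
        return 0;
    }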
302 The number of entries in the per-queue flow table is set through:
304 /sys/class/net/<dev>/queues/rx-<n>/rps_flow_cnt
317 For a multi-queue device, the rps_flow_cnt for each queue might be
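The suggested sizing divides the global socket flow table evenly across queues; a trivial worked example with assumed values:

    /* Example sizing only: rps_flow_cnt per queue as
     * rps_sock_flow_entries / N for N receive queues. */
    #include <stdio.h>

    int main(void)
    {
        unsigned int rps_sock_flow_entries = 32768; /* global table */
        unsigned int nqueues = 16;                  /* example device */

        printf("per-queue rps_flow_cnt = %u\n",
               rps_sock_flow_entries / nqueues);    /* -> 2048 */
        return 0;
    }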
327 Accelerated RFS is to RFS what RSS is to RPS: a hardware-accelerated load
344 is maintained by the NIC driver. This is an auto-generated reverse map of
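The idea of that reverse map, modeled in plain userspace with hypothetical affinity masks (the kernel builds the real one with its cpu_rmap library; this only shows the inversion): from each queue's IRQ affinity, derive the CPU-to-queue table that lets accelerated RFS steer a flow to the queue whose interrupt fires on the consuming CPU.

    /* Invert per-queue IRQ affinity masks into a CPU -> rx queue map. */
    #include <stdio.h>

    #define NCPUS   8
    #define NQUEUES 4

    int main(void)
    {
        /* Hypothetical affinity: queue i's IRQ bound to CPUs 2i, 2i+1. */
        unsigned long irq_affinity[NQUEUES] = { 0x03, 0x0c, 0x30, 0xc0 };
        int cpu_to_queue[NCPUS];

        for (int cpu = 0; cpu < NCPUS; cpu++) {
            cpu_to_queue[cpu] = -1;          /* no queue interrupts here */
            for (int q = 0; q < NQUEUES; q++)
                if (irq_affinity[q] & (1UL << cpu))
                    cpu_to_queue[cpu] = q;
        }
        for (int cpu = 0; cpu < NCPUS; cpu++)
            printf("CPU %d -> rx queue %d\n", cpu, cpu_to_queue[cpu]);
        return 0;
    }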
368 which transmit queue to use when transmitting a packet on a multi-queue
392 busy polling multi-threaded workloads where there are challenges in
399 the same queue-association that a given application is polling on. This
406 CPUs/receive-queues that may use that queue to transmit. The reverse
407 mapping, from CPUs to transmit queues or from receive-queues to transmit
411 for the socket connection for a match in the receive queue-to-transmit queue
413 running CPU as a key into the CPU-to-queue lookup table. If the
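A sketch of that two-step selection, assuming a toy per-CPU map: the sending CPU indexes the CPU-to-queue table, and when several queues are eligible the flow hash picks one so that a given flow sticks to a single queue.

    /* Toy XPS selection: per-CPU candidate queues, hash tie-break. */
    #include <stdio.h>
    #include <stdint.h>

    #define NCPUS 4

    struct xps_map { int nqueues; int queue[2]; };

    static const struct xps_map cpu_map[NCPUS] = {
        { 1, { 0 } },        /* CPU 0 -> txq 0            */
        { 1, { 1 } },        /* CPU 1 -> txq 1            */
        { 2, { 2, 3 } },     /* CPUs 2,3 share txqs 2 & 3 */
        { 2, { 2, 3 } },
    };

    static int xps_pick_txq(int cpu, uint32_t flow_hash)
    {
        const struct xps_map *m = &cpu_map[cpu];
        /* Hash keeps all of a flow's packets on one queue. */
        return m->queue[m->nqueues > 1 ? flow_hash % m->nqueues : 0];
    }

    int main(void)
    {
        printf("CPU 1, flow A -> txq %d\n", xps_pick_txq(1, 0x1234));
        printf("CPU 3, flow B -> txq %d\n", xps_pick_txq(3, 0x89ab));
        return 0;
    }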
426 skb->ooo_okay is set for a packet in the flow. This flag indicates that
437 configured. To enable XPS, the bitmap of CPUs/receive-queues that may
441 /sys/class/net/<dev>/queues/tx-<n>/xps_cpus
443 For selection based on receive-queues map:
444 /sys/class/net/<dev>/queues/tx-<n>/xps_rxqs
449 has no effect, since there is no choice in this case. In a multi-queue
450 system, XPS is preferably configured so that each CPU maps onto one queue.
451 If there are as many queues as there are CPUs in the system, then each
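Under that one-queue-per-CPU layout the masks are trivial: queue n's bitmap is just 1 << n, as this assumed 8-CPU, 8-queue example prints.

    /* Print the hex masks to write into each tx-<n>/xps_cpus when
     * queue n is dedicated to CPU n (hypothetical 8x8 system). */
    #include <stdio.h>

    int main(void)
    {
        for (int q = 0; q < 8; q++)
            printf("tx-%d: xps_cpus = %x\n", q, 1u << q);
        return 0;
    }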
459 explicitly configured mapping receive-queue(s) to transmit queue(s). If the
460 user configuration for receive-queue map does not apply, then the transmit
466 These are rate-limitation mechanisms implemented by the hardware; currently
467 a max-rate attribute is supported, set by writing a Mbps value to
469 /sys/class/net/<dev>/queues/tx-<n>/tx_maxrate
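Same pattern as the rps_cpus write shown earlier: a sketch capping one queue of a hypothetical eth0 at 1000 Mbps (a value of 0 disables the cap); requires root and a driver that implements the attribute.

    /* Cap tx queue 0 of a hypothetical eth0 at 1000 Mbps. */
    #include <stdio.h>

    int main(void)
    {
        FILE *f = fopen("/sys/class/net/eth0/queues/tx-0/tx_maxrate", "w");

        if (!f) { perror("tx_maxrate"); return 1; }
        fprintf(f, "%u\n", 1000);   /* Mbps */
        return fclose(f) ? 1 : 0;
    }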