======================
NAPI
======================

NAPI is the event handling mechanism used by the Linux networking stack.
The name NAPI no longer stands for anything in particular [#]_.

In basic operation the device notifies the host about new events
via an interrupt.
The host then schedules a NAPI instance to process the events.
The device may also be polled for events via NAPI without receiving
interrupts first (busy polling, see below).

NAPI processing usually happens in the software interrupt context,
but there is an option to use separate kernel threads
for NAPI processing (see Threaded NAPI below).

All in all, NAPI abstracts away from the drivers the context and configuration
of event (packet Rx and Tx) processing.

Driver API
==========

The two most important elements of NAPI are the struct napi_struct
and the associated poll method. struct napi_struct holds the state
of the NAPI instance while the method is the driver-specific event
handler. The method will typically free Tx packets which have been
transmitted and process newly received packets.
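
For orientation, a minimal sketch of how a driver might hold this state;
the ``mydrv_*`` names are made up for this document and do not come from
any real driver:

.. code-block:: c

  /* Hypothetical per-interrupt-vector state of a driver "mydrv".
   * Embedding struct napi_struct lets the poll method recover the
   * driver state with container_of(). */
  struct mydrv_vector {
      struct napi_struct napi;    /* NAPI state for this vector */
      struct mydrv_rx_ring *rxr;  /* Rx queue serviced by this vector */
      struct mydrv_tx_ring *txr;  /* Tx queue serviced by this vector */
      int idx;                    /* vector index, used for IRQ masking */
  };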

netif_napi_add() and netif_napi_del() add/remove a NAPI instance
from the system. The instances are attached to the netdevice passed
as argument (and will be deleted when the netdevice is unregistered).
Instances are added in a disabled state.

napi_enable() and napi_disable() manage the disabled state.
A disabled NAPI can't be scheduled and its poll method is guaranteed
to not be invoked. napi_disable() waits for ownership of the NAPI
instance to be released.
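
Continuing the hypothetical ``mydrv`` sketch, the control calls typically
appear in the device open/stop paths roughly as below. Note that the exact
netif_napi_add() signature differs between kernel versions (older kernels
take an additional weight argument):

.. code-block:: c

  static int mydrv_open(struct net_device *netdev)
  {
      struct mydrv_priv *priv = netdev_priv(netdev);
      struct mydrv_vector *v = &priv->vectors[0];

      /* attach the instance to the netdevice; it starts disabled */
      netif_napi_add(netdev, &v->napi, mydrv_poll);
      /* allow the instance to be scheduled */
      napi_enable(&v->napi);
      return 0;
  }

  static int mydrv_stop(struct net_device *netdev)
  {
      struct mydrv_priv *priv = netdev_priv(netdev);
      struct mydrv_vector *v = &priv->vectors[0];

      /* waits until the poll method can no longer be running */
      napi_disable(&v->napi);
      netif_napi_del(&v->napi);
      return 0;
  }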

napi_schedule() is the basic method of scheduling a NAPI poll.
Drivers should call it from their interrupt handler. A successful
call to napi_schedule() will take ownership of the NAPI instance.
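
For illustration, the interrupt handler of the hypothetical driver could be
as simple as the sketch below, assuming the vector was registered as the IRQ
cookie; devices which auto-mask their IRQ need nothing beyond the
napi_schedule() call:

.. code-block:: c

  static irqreturn_t mydrv_irq_handler(int irq, void *data)
  {
      struct mydrv_vector *v = data;

      /* hand the event off to NAPI; the poll method runs later */
      napi_schedule(&v->napi);
      return IRQ_HANDLED;
  }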

Later, after NAPI is scheduled, the driver's poll method will be
called to process the events/packets. The method takes a ``budget``
argument - drivers can process completions for any number of Tx
packets but should only process up to ``budget`` number of Rx packets.

The poll method returns the amount of work done. If the driver still
has outstanding work to do (e.g. ``budget`` was exhausted) the poll
method should return exactly ``budget``. In that case,
the NAPI instance will be serviced/polled again (without the
need for another IRQ). If event processing has been completed, the poll
method should call napi_complete_done() before returning; this releases
ownership of the instance.
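
A sketch of a poll method following these rules; ``mydrv_clean_tx()``,
``mydrv_rx()`` and ``mydrv_unmask_rxtx_irq()`` stand in for driver-specific
processing and are not real kernel APIs:

.. code-block:: c

  static int mydrv_poll(struct napi_struct *napi, int budget)
  {
      struct mydrv_vector *v = container_of(napi, struct mydrv_vector, napi);
      int work_done;

      /* Tx completions are not limited by the budget */
      mydrv_clean_tx(v->txr);

      /* process at most @budget Rx packets */
      work_done = mydrv_rx(v->rxr, budget);

      /* more work pending (or budget was 0) - we will be polled again */
      if (work_done == budget)
          return budget;

      /* all events handled - release ownership, then re-arm the IRQ */
      if (napi_complete_done(napi, work_done))
          mydrv_unmask_rxtx_irq(v->idx);

      return work_done;
  }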

Drivers should keep the interrupts masked after scheduling
the NAPI instance - until NAPI polling finishes any further
interrupts are unnecessary. Drivers which have to mask the interrupts
explicitly (as opposed to the IRQ being auto-masked by the device)
should use the napi_schedule_prep() and __napi_schedule() calls:

.. code-block:: c

  if (napi_schedule_prep(&v->napi)) {
      /* mask the device IRQ before scheduling to avoid races;
       * mydrv_mask_rxtx_irq() stands for a driver-specific helper */
      mydrv_mask_rxtx_irq(v->idx);
      __napi_schedule(&v->napi);
  }

The IRQ should only be unmasked after a successful call to napi_complete_done():

.. code-block:: c

  if (budget && napi_complete_done(&v->napi, work_done)) {
      /* re-arm the device IRQ only once NAPI ownership is released */
      mydrv_unmask_rxtx_irq(v->idx);
      return min(work_done, budget - 1);
  }

Instance to queue mapping
-------------------------

Modern devices have multiple NAPI instances (struct napi_struct) per
interface. There is no strong requirement on how the instances are
mapped to queues and interrupts. NAPI is primarily a polling/processing
abstraction without specific user-facing semantics. That said, most networking
devices end up using NAPI in fairly similar ways.

NAPI instances most often correspond 1:1:1 to interrupts and queue pairs
(a queue pair is a set of a single Rx and a single Tx queue).

In less common cases a NAPI instance may be used for multiple queues,
or Rx and Tx queues can be serviced by separate NAPI instances on a single
core. Regardless of the queue assignment, however, there is usually still
a 1:1 mapping between NAPI instances and interrupts.
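
To make the common arrangement concrete, a hypothetical multi-queue driver
could set up one NAPI instance and one interrupt per queue pair along these
lines (all ``mydrv_*`` names are illustrative):

.. code-block:: c

  /* one NAPI instance per queue pair, one IRQ per instance */
  static int mydrv_setup_vectors(struct mydrv_priv *priv, int nr_pairs)
  {
      int i, err;

      for (i = 0; i < nr_pairs; i++) {
          struct mydrv_vector *v = &priv->vectors[i];

          v->rxr = &priv->rx_rings[i];
          v->txr = &priv->tx_rings[i];
          netif_napi_add(priv->netdev, &v->napi, mydrv_poll);

          err = request_irq(priv->irqs[i], mydrv_irq_handler, 0,
                            "mydrv", v);
          if (err)
              return err;
      }
      return 0;
  }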

The ethtool API uses a "channel" terminology where each channel can be
``rx``, ``tx`` or ``combined``. The recommended interpretation is to understand
a channel as an IRQ/NAPI instance which services queues of a given type. For
example, a configuration of 1 ``rx``, 1 ``tx`` and 1 ``combined`` channel is
expected to utilize 3 interrupts, 2 Rx and 2 Tx queues.

User API
========

User interactions with NAPI depend on the NAPI instance ID. The instance IDs
are only visible to the user through the ``SO_INCOMING_NAPI_ID`` socket option.
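
For example, a user space application could read the ID of the NAPI instance
which delivered the most recent packets on a socket roughly as follows
(sketch; the fallback define is only needed with older libc headers):

.. code-block:: c

  #include <stdio.h>
  #include <sys/socket.h>

  #ifndef SO_INCOMING_NAPI_ID
  #define SO_INCOMING_NAPI_ID 56  /* from include/uapi/asm-generic/socket.h */
  #endif

  /* Print the ID of the NAPI instance which delivered the last packets
   * received on the connected socket @fd (0 if none is known yet). */
  static void print_napi_id(int fd)
  {
      unsigned int napi_id = 0;
      socklen_t len = sizeof(napi_id);

      if (getsockopt(fd, SOL_SOCKET, SO_INCOMING_NAPI_ID, &napi_id, &len) == 0)
          printf("NAPI ID: %u\n", napi_id);
  }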

Software IRQ coalescing
-----------------------

NAPI does not perform any explicit event coalescing by default.
In most scenarios batching happens due to IRQ coalescing done by the device;
there are cases where software coalescing is helpful.

NAPI can be configured to arm a repoll timer instead of unmasking
the hardware interrupts as soon as all packets are processed.
The ``gro_flush_timeout`` sysfs attribute of the netdevice controls
the delay of the timer (in nanoseconds), while ``napi_defer_hard_irqs``
controls the number of consecutive empty polls
before NAPI gives up and goes back to using hardware IRQs.
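
As an illustration only (the interface name ``eth0`` and the values are
arbitrary), the two knobs can be set from user space by writing the
per-netdevice sysfs files:

.. code-block:: c

  #include <stdio.h>

  /* Write @val to the sysfs attribute at @path; returns 0 on success. */
  static int write_sysfs(const char *path, const char *val)
  {
      FILE *f = fopen(path, "w");

      if (!f)
          return -1;
      fputs(val, f);
      return fclose(f);
  }

  int main(void)
  {
      /* repoll for up to 20us (20000ns) before re-arming the hardware IRQ */
      write_sysfs("/sys/class/net/eth0/gro_flush_timeout", "20000");
      /* tolerate 2 consecutive empty polls before falling back to IRQs */
      write_sysfs("/sys/class/net/eth0/napi_defer_hard_irqs", "2");
      return 0;
  }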

Busy polling
------------

Busy polling allows a user process to check for incoming packets before
the device interrupt fires. As is the case with any busy polling, it trades
off CPU cycles for lower latency (production uses of NAPI busy polling
are not well known).

Busy polling is enabled by either setting ``SO_BUSY_POLL`` on
selected sockets or using the global ``net.core.busy_poll`` and
``net.core.busy_read`` sysctls. An io_uring API for NAPI busy polling
also exists.

The NAPI budget for busy polling is lower than the default (which makes
sense given the low latency intention of normal busy polling); if needed,
it can be adjusted with the ``SO_BUSY_POLL_BUDGET`` socket option.
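
A sketch of per-socket configuration; the values are arbitrary and
``SO_BUSY_POLL_BUDGET`` may need a fallback define with older libc headers:

.. code-block:: c

  #include <sys/socket.h>

  #ifndef SO_BUSY_POLL_BUDGET
  #define SO_BUSY_POLL_BUDGET 70  /* from include/uapi/asm-generic/socket.h */
  #endif

  /* Enable busy polling on socket @fd: spin for up to 50us waiting for
   * packets and allow up to 16 packets per busy-poll cycle. */
  static int enable_busy_poll(int fd)
  {
      int usecs = 50;
      int budget = 16;

      if (setsockopt(fd, SOL_SOCKET, SO_BUSY_POLL, &usecs, sizeof(usecs)))
          return -1;
      return setsockopt(fd, SOL_SOCKET, SO_BUSY_POLL_BUDGET,
                        &budget, sizeof(budget));
  }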

Threaded NAPI
-------------

Threaded NAPI is an operating mode that uses dedicated kernel
threads rather than software IRQ context for NAPI processing.
The configuration is per netdevice and will affect all
NAPI instances of that device. Each NAPI instance will spawn a separate
thread (called ``napi/${ifc-name}-${napi-id}``).

It is recommended to pin each kernel thread to a single CPU, the same
CPU as the one which services the interrupt. Note that the mapping
between IRQs and NAPI instances may not be trivial (and is driver
dependent). The NAPI instance IDs will be assigned in the opposite
order than the process IDs of the kernel threads.

Threaded NAPI is controlled by writing 0/1 to the ``threaded`` file in
the netdev's sysfs directory.
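
For example (interface name again arbitrary), threaded mode could be toggled
from user space with a few lines of C; writing ``0`` instead reverts to
software IRQ processing:

.. code-block:: c

  #include <stdio.h>

  /* Switch the hypothetical interface "eth0" to threaded NAPI mode. */
  int main(void)
  {
      FILE *f = fopen("/sys/class/net/eth0/threaded", "w");

      if (!f)
          return 1;
      fputs("1", f);
      return fclose(f) ? 1 : 0;
  }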

.. [#] NAPI was originally referred to as New API in 2.4 Linux.