Lines Matching full:eoi
256 * as a "replay" because EOI decided there was still something in xive_get_irq()
264 * entry (on HW interrupt) from a replay triggered by EOI, in xive_get_irq()
285 * After EOI'ing an interrupt, we need to re-check the queue
297 DBG_VERBOSE("eoi: pending=0x%02x\n", xc->pending_prio); in xive_do_queue_eoi()
303 * EOI an interrupt at the source. There are several methods
308 /* If the XIVE supports the new "store EOI" facility, use it */ in xive_do_source_eoi()
319 if (WARN_ON_ONCE(!xive_ops->eoi)) in xive_do_source_eoi()
321 xive_ops->eoi(hw_irq); in xive_do_source_eoi()
326 * Otherwise for EOI, we use the special MMIO that does in xive_do_source_eoi()
328 * except for LSIs where we use the "EOI cycle" special in xive_do_source_eoi()
334 * For LSIs the HW EOI cycle is used rather than PQ bits, in xive_do_source_eoi()
351 /* irq_chip eoi callback */
361 * EOI the source if it hasn't been disabled and hasn't in xive_irq_eoi()
783 * 11, then perform an EOI. in xive_irq_retrigger()
789 * avoid calling into the backend EOI code which we don't in xive_irq_retrigger()
791 * only do EOI for LSIs anyway. in xive_irq_retrigger()
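The fragments above describe how a trigger seen while an interrupt is pending latches the Q bit, how EOI clears the pending state and replays the source when Q was set, and how xive_irq_retrigger() forces PQ to 11 before EOI to provoke a resend. A minimal software sketch of those state transitions, assuming the kernel's XIVE_ESB_VAL_P/XIVE_ESB_VAL_Q bit values; the `struct esb`, `esb_trigger()`, `esb_eoi()`, and `esb_retrigger()` helpers are illustrative models, not kernel APIs:

```c
#include <stdbool.h>

#define XIVE_ESB_VAL_P 0x2	/* interrupt pending (delivered, not yet EOIed) */
#define XIVE_ESB_VAL_Q 0x1	/* another trigger arrived while pending */

struct esb { unsigned char pq; };

/* A new trigger: deliver if idle, otherwise just latch Q. */
static void esb_trigger(struct esb *e)
{
	if (e->pq & XIVE_ESB_VAL_P)
		e->pq |= XIVE_ESB_VAL_Q;
	else
		e->pq = XIVE_ESB_VAL_P;
}

/* EOI: reset PQ; a set Q bit means the source must be replayed. */
static bool esb_eoi(struct esb *e)
{
	bool replay = (e->pq & XIVE_ESB_VAL_Q) != 0;

	e->pq = 0;
	return replay;
}

/* Retrigger trick: force PQ to 11, then EOI, so the EOI always resends. */
static bool esb_retrigger(struct esb *e)
{
	e->pq = XIVE_ESB_VAL_P | XIVE_ESB_VAL_Q;
	return esb_eoi(e);
}
```

In this model the "replay" mentioned at line 256 of the source corresponds to `esb_eoi()` returning true: the EOI itself discovered a latched trigger and the interrupt must be re-presented.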
845 * This saved_p is cleared by the host EOI, when we know in xive_irq_set_vcpu_affinity()
856 * that we *will* eventually get an EOI for it on in xive_irq_set_vcpu_affinity()
898 * interrupt with an EOI. If it is set, we know there is in xive_irq_set_vcpu_affinity()
1023 DBG_VERBOSE("IPI eoi: irq=%d [0x%lx] (HW IRQ 0x%x) pending=%02x\n", in xive_ipi_eoi()
1353 * For LSIs, we EOI, this will cause a resend if it's in xive_flush_cpu_queue()
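Several fragments (xive_do_source_eoi(), xive_flush_cpu_queue()) note that LSIs use the HW "EOI cycle" rather than PQ bits: being level-triggered, an LSI is simply resent on EOI if its input line is still asserted. A small sketch of that behavior, assuming a hypothetical `struct lsi` model (not a kernel structure):

```c
#include <stdbool.h>

struct lsi { bool asserted; bool in_flight; };

/* Raising the level input delivers the interrupt if none is in flight. */
static void lsi_assert(struct lsi *l)
{
	l->asserted = true;
	l->in_flight = true;
}

static void lsi_deassert(struct lsi *l)
{
	l->asserted = false;
}

/*
 * HW EOI cycle: completes the in-flight interrupt and immediately
 * resends it when the level input is still asserted, which is why
 * no PQ replay tracking is needed for LSIs.
 */
static bool lsi_eoi(struct lsi *l)
{
	l->in_flight = l->asserted;
	return l->in_flight;
}
```

This is why xive_flush_cpu_queue() can just EOI an LSI when migrating queues: if the device still asserts the line, the EOI itself causes the resend on the new target.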