
Lines Matching full:eoi

315 * as a "replay" because EOI decided there was still something in xive_get_irq()
323 * entry (on HW interrupt) from a replay triggered by EOI, in xive_get_irq()
344 * After EOI'ing an interrupt, we need to re-check the queue
356 DBG_VERBOSE("eoi: pending=0x%02x\n", xc->pending_prio); in xive_do_queue_eoi()
362 * EOI an interrupt at the source. There are several methods
368 /* If the XIVE supports the new "store EOI" facility, use it */ in xive_do_source_eoi()
379 if (WARN_ON_ONCE(!xive_ops->eoi)) in xive_do_source_eoi()
381 xive_ops->eoi(hw_irq); in xive_do_source_eoi()
386 * Otherwise for EOI, we use the special MMIO that does in xive_do_source_eoi()
388 * except for LSIs where we use the "EOI cycle" special in xive_do_source_eoi()
394 * For LSIs the HW EOI cycle is used rather than PQ bits, in xive_do_source_eoi()
411 /* irq_chip eoi callback, called with irq descriptor lock held */
421 * EOI the source if it hasn't been disabled and hasn't in xive_irq_eoi()
843 * 11, then perform an EOI. in xive_irq_retrigger()
849 * avoid calling into the backend EOI code which we don't in xive_irq_retrigger()
851 * only do EOI for LSIs anyway. in xive_irq_retrigger()
913 * This saved_p is cleared by the host EOI, when we know in xive_irq_set_vcpu_affinity()
923 * that we *will* eventually get an EOI for it on in xive_irq_set_vcpu_affinity()
964 * interrupt with an EOI. If it is set, we know there is in xive_irq_set_vcpu_affinity()
1117 DBG_VERBOSE("IPI eoi: irq=%d [0x%lx] (HW IRQ 0x%x) pending=%02x\n", in xive_ipi_eoi()
1452 * For LSIs, we EOI, this will cause a resend if it's in xive_flush_cpu_queue()