KVM Lock Overview
=================

1. Acquisition Orders
---------------------

The acquisition orders for mutexes are as follows:

- kvm->lock is taken outside vcpu->mutex

- kvm->lock is taken outside kvm->slots_lock and kvm->irq_lock

- kvm->slots_lock is taken outside kvm->irq_lock, though acquiring
  them together is quite rare.

On x86, vcpu->mutex is taken outside kvm->arch.hyperv.hv_lock.

Everything else is a leaf: no other lock is taken inside the critical
sections.

2. Exception
------------

Fast page fault:

Fast page fault is the fast path which fixes the guest page fault out of
the mmu-lock on x86. Currently, the page fault can be fast in one of the
following two cases:

1. Access Tracking: The SPTE is not present, but it is marked for access
tracking, i.e. the SPTE_SPECIAL_MASK is set. That means we need to
restore the saved R/X bits. This is described in more detail below.

2. Write-Protection: The SPTE is present and the fault is
caused by write-protect. That means we just need to change the W bit of
the spte.

What we use to avoid all the races is the SPTE_HOST_WRITEABLE bit and the
SPTE_MMU_WRITEABLE bit on the spte:
- SPTE_HOST_WRITEABLE means the gfn is writable on host.
- SPTE_MMU_WRITEABLE means the gfn is writable on mmu. The bit is set when
  the gfn is writable on the guest mmu and it is not write-protected by
  shadow page write-protection.

On the fast page fault path, we will use cmpxchg to atomically set the
spte W bit if spte.SPTE_HOST_WRITEABLE = 1 and spte.SPTE_MMU_WRITEABLE = 1,
or restore the saved R/X bits if the SPTE_SPECIAL_MASK (access tracking) is
set, or both. This is safe because any change to these bits is detected by
the cmpxchg.

But we need to carefully check these cases:

1): The mapping from gfn to pfn

The mapping from gfn to pfn may change, since we can only ensure the pfn
is not changed during the cmpxchg. This is an ABA problem; for example,
the following case can happen:

At the beginning:
gpte = gfn1
gfn1 is mapped to pfn1 on host
spte is the shadow page table entry corresponding with gpte and
spte = pfn1

   CPU 0                            CPU 1
on fast page fault path:

   old_spte = *spte;
                                    pfn1 is swapped out:
                                       spte = 0;

                                    pfn1 is re-alloced for gfn2.

                                    gpte is changed to point to
                                    gfn2 by the guest:
                                       spte = pfn1;

   if (cmpxchg(spte, old_spte, old_spte+W))
      mark_page_dirty(vcpu->kvm, gfn1)
         OOPS!!!

We dirty-log for gfn1; that means gfn2 is lost in the dirty bitmap.

For a direct sp, we can easily avoid it, since the spte of a direct sp is
fixed to the gfn. For an indirect sp, before we do the cmpxchg, we call
gfn_to_pfn_atomic() to pin the gfn to the pfn, because after
gfn_to_pfn_atomic():

- We have held the refcount of the pfn; that means the pfn can not be
  freed and reused for another gfn.
- The pfn is writable; that means it can not be shared between different
  gfns by KSM.

Then, we can ensure the dirty bitmap is correctly set for the gfn.

Currently, to simplify things, we disable fast page fault for indirect
shadow pages.
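To make this case concrete, below is a minimal sketch of the lockless
W-bit fix-up described above. It is illustrative only: the helper name
fast_pf_fix_spte() is hypothetical, and the bit macros and calling
context are simplified stand-ins for the real code in
arch/x86/kvm/mmu.c.

   /*
    * Illustrative sketch only -- a simplified stand-in for the real
    * fast page fault code in arch/x86/kvm/mmu.c.
    */
   static bool fast_pf_fix_spte(struct kvm_vcpu *vcpu, gfn_t gfn,
                                u64 *sptep, u64 old_spte)
   {
           u64 new_spte = old_spte | PT_WRITABLE_MASK;

           /* Only host- and mmu-writable sptes may be fixed locklessly. */
           if (!(old_spte & SPTE_HOST_WRITEABLE) ||
               !(old_spte & SPTE_MMU_WRITEABLE))
                   return false;

           /*
            * If the spte was zapped or remapped after old_spte was read,
            * the cmpxchg fails and we fall back to the slow path, so a
            * stale spte is never made writable and gfn1/gfn2 in the ABA
            * example above cannot be confused.
            */
           if (cmpxchg64(sptep, old_spte, new_spte) != old_spte)
                   return false;

           /* Safe: for an indirect sp the gfn was pinned beforehand. */
           mark_page_dirty(vcpu->kvm, gfn);
           return true;
   }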
2): Dirty bit tracking

In the original code, the spte can be updated non-atomically if the spte
is read-only and the Accessed bit has already been set, since in that
case the Accessed bit and the Dirty bit can not be lost.

But this is no longer true after fast page fault, since the spte can be
marked writable between reading the spte and updating it, as in the
following case:

At the beginning:
spte.W = 0
spte.Accessed = 1

   CPU 0                            CPU 1
In mmu_spte_clear_track_bits():

   old_spte = *spte;

   /* 'if' condition is satisfied. */
   if (old_spte.Accessed == 1 &&
       old_spte.W == 0)
      spte = 0ull;
                                    on fast page fault path:
                                       spte.W = 1
                                    memory write on the spte:
                                       spte.Dirty = 1

   else
      old_spte = xchg(spte, 0ull)

   if (old_spte.Accessed == 1)
      kvm_set_pfn_accessed(spte.pfn);
   if (old_spte.Dirty == 1)
      kvm_set_pfn_dirty(spte.pfn);
         OOPS!!!

The Dirty bit is lost in this case.

In order to avoid this kind of issue, we always treat the spte as
"volatile" if it can be updated out of mmu-lock; see
spte_has_volatile_bits(). It means the spte is always updated atomically
in this case.

3): Flushing TLBs due to spte updates

If the spte is updated from writable to read-only, we should flush all
TLBs, otherwise rmap_write_protect will find a read-only spte, even
though the writable spte might be cached in a CPU's TLB.

As mentioned before, the spte can be updated to writable out of mmu-lock
on the fast page fault path. In order to easily audit the path, we check
in mmu_spte_update() whether TLBs need to be flushed for this reason,
since this is a common function to update the spte (present -> present).

Since the spte is "volatile" if it can be updated out of mmu-lock, we
always update it atomically, and the race caused by fast page fault can
be avoided. See the comments in spte_has_volatile_bits() and
mmu_spte_update().

Lockless Access Tracking:

This is used for Intel CPUs that are using EPT but do not support the
EPT A/D bits. In this case, when the KVM MMU notifier is called to track
accesses to a page (via kvm_mmu_notifier_clear_flush_young), it marks
the PTE as not-present by clearing the RWX bits in the PTE and storing
the original R & X bits in some unused/ignored bits. In addition, the
SPTE_SPECIAL_MASK is also set on the PTE (using the ignored bit 62).
When the VM tries to access the page later on, a fault is generated and
the fast page fault mechanism described above is used to atomically
restore the PTE to a Present state. The W bit is not saved when the PTE
is marked for access tracking, and during restoration to the Present
state the W bit is set depending on whether or not it was a write
access. If it wasn't, then the W bit will remain clear until a write
access happens, at which time it will be set using the Dirty tracking
mechanism described above.
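The save/restore of the R/X bits can be sketched as follows. The helper
names mirror mark_spte_for_access_track() and restore_acc_track_spte()
in arch/x86/kvm/mmu.c, but the shift and mask values here are simplified
assumptions rather than the exact kernel definitions.

   /*
    * Simplified sketch of access tracking for EPT without A/D bits;
    * the shift and masks are assumptions, not the exact definitions
    * used by arch/x86/kvm/mmu.c.
    */
   #define ACC_TRACK_SAVED_SHIFT  52      /* an ignored bit range */
   #define ACC_TRACK_RX_MASK      (VMX_EPT_READABLE_MASK | \
                                   VMX_EPT_EXECUTABLE_MASK)

   static u64 mark_spte_for_access_track(u64 spte)
   {
           /* Save R/X into ignored bits, then clear RWX: not-present. */
           spte |= (spte & ACC_TRACK_RX_MASK) << ACC_TRACK_SAVED_SHIFT;
           spte &= ~(u64)(ACC_TRACK_RX_MASK | VMX_EPT_WRITABLE_MASK);
           return spte | SPTE_SPECIAL_MASK;        /* ignored bit 62 */
   }

   static u64 restore_acc_track_spte(u64 spte)
   {
           /* Put the saved R/X bits back; W is handled separately. */
           u64 saved = (spte >> ACC_TRACK_SAVED_SHIFT) & ACC_TRACK_RX_MASK;

           spte &= ~(SPTE_SPECIAL_MASK |
                     ((u64)ACC_TRACK_RX_MASK << ACC_TRACK_SAVED_SHIFT));
           return spte | saved;
   }

The restore itself runs on the fast page fault path and is applied with
cmpxchg, exactly like the W-bit fix-up above.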
3. Reference
------------

Name:     kvm_lock
Type:     mutex
Arch:     any
Protects: - vm_list

Name:     kvm_count_lock
Type:     raw_spinlock_t
Arch:     any
Protects: - hardware virtualization enable/disable
Comment:  'raw' because hardware enabling/disabling must be atomic with
          respect to migration.

Name:     kvm_arch::tsc_write_lock
Type:     raw_spinlock
Arch:     x86
Protects: - kvm_arch::{last_tsc_write,last_tsc_nsec,last_tsc_offset}
          - tsc offset in vmcb
Comment:  'raw' because updating the tsc offsets must not be preempted.

Name:     kvm->mmu_lock
Type:     spinlock_t
Arch:     any
Protects: - shadow page/shadow tlb entry
Comment:  it is a spinlock since it is used in the mmu notifier.

Name:     kvm->srcu
Type:     srcu lock
Arch:     any
Protects: - kvm->memslots
          - kvm->buses
Comment:  The srcu read lock must be held while accessing memslots (e.g.
          when using gfn_to_* functions) and while accessing in-kernel
          MMIO/PIO address->device structure mapping (kvm->buses). The
          srcu index can be stored in kvm_vcpu->srcu_idx per vcpu if it
          is needed by multiple functions.

Name:     blocked_vcpu_on_cpu_lock
Type:     spinlock_t
Arch:     x86
Protects: blocked_vcpu_on_cpu
Comment:  This is a per-CPU lock used for VT-d posted-interrupts. When
          VT-d posted-interrupts are supported and the VM has assigned
          devices, we put the blocked vCPU on the blocked_vcpu_on_cpu
          list, protected by blocked_vcpu_on_cpu_lock. When the VT-d
          hardware issues a wakeup notification event (because external
          interrupts from the assigned devices have arrived), we find
          the vCPU on the list and wake it up.
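As an illustration of the kvm->srcu rule above, any reader of the
memslots takes the srcu read lock around the gfn_to_* translation. The
function below is a hypothetical example; srcu_read_lock(),
srcu_read_unlock() and gfn_to_pfn() are the real APIs.

   /* Hypothetical example of the kvm->srcu read-side pattern. */
   static kvm_pfn_t example_gfn_lookup(struct kvm *kvm, gfn_t gfn)
   {
           kvm_pfn_t pfn;
           int idx;

           idx = srcu_read_lock(&kvm->srcu);
           /*
            * kvm->memslots may only be dereferenced inside the srcu
            * read-side critical section, e.g. via the gfn_to_* helpers.
            */
           pfn = gfn_to_pfn(kvm, gfn);
           srcu_read_unlock(&kvm->srcu, idx);

           return pfn;
   }

Code that needs the srcu index across several functions can store it in
kvm_vcpu->srcu_idx instead, as noted in the comment above.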