.. SPDX-License-Identifier: GPL-2.0

=================
KVM Lock Overview
=================

1. Acquisition Orders
---------------------

The acquisition orders for mutexes are as follows:

- cpus_read_lock() is taken outside kvm_lock

- kvm_usage_lock is taken outside cpus_read_lock()

- kvm->lock is taken outside vcpu->mutex

- kvm->lock is taken outside kvm->slots_lock and kvm->irq_lock

- kvm->slots_lock is taken outside kvm->irq_lock, though acquiring
  them together is quite rare.

- kvm->mn_active_invalidate_count ensures that pairs of
  invalidate_range_start() and invalidate_range_end() callbacks
  use the same memslots array.  kvm->slots_lock and kvm->slots_arch_lock
  are taken on the waiting side when modifying memslots, so MMU notifiers
  must not take either kvm->slots_lock or kvm->slots_arch_lock.

cpus_read_lock() vs kvm_lock:

- Taking cpus_read_lock() outside of kvm_lock is problematic, despite that
  being the official ordering, as it is quite easy to unknowingly trigger
  cpus_read_lock() while holding kvm_lock.  Use caution when walking vm_list,
  e.g. avoid complex operations when possible.

For SRCU:

- ``synchronize_srcu(&kvm->srcu)`` is called inside critical sections
  for kvm->lock, vcpu->mutex and kvm->slots_lock.  These locks _cannot_
  be taken inside a kvm->srcu read-side critical section; that is, the
  following is broken::

      srcu_read_lock(&kvm->srcu);
      mutex_lock(&kvm->slots_lock);

- kvm->slots_arch_lock instead is released before the call to
  ``synchronize_srcu()``.  It _can_ therefore be taken inside a
  kvm->srcu read-side critical section, for example while processing
  a vmexit (see the sketch after this list).
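
The following is a minimal sketch (not actual kernel code; the helper names
are hypothetical) contrasting the two rules above::

    static void slots_lock_ok(struct kvm *kvm)
    {
            /* Taken outside kvm->srcu: the critical section may call
             * synchronize_srcu(&kvm->srcu) without deadlocking.
             */
            mutex_lock(&kvm->slots_lock);
            /* ... install new memslots ... */
            mutex_unlock(&kvm->slots_lock);
    }

    static void slots_arch_lock_ok(struct kvm *kvm)
    {
            int idx = srcu_read_lock(&kvm->srcu);

            /* OK inside kvm->srcu: this lock is always dropped before
             * synchronize_srcu() is called.
             */
            mutex_lock(&kvm->slots_arch_lock);
            /* ... modify arch-specific memslot fields ... */
            mutex_unlock(&kvm->slots_arch_lock);

            srcu_read_unlock(&kvm->srcu, idx);
    }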

On x86:

- vcpu->mutex is taken outside kvm->arch.hyperv.hv_lock and kvm->arch.xen.xen_lock

- kvm->arch.mmu_lock is an rwlock; critical sections for
  kvm->arch.tdp_mmu_pages_lock and kvm->arch.mmu_unsync_pages_lock must
  also take kvm->arch.mmu_lock
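
Putting the generic ordering rules above together, a hypothetical (not
in-tree) function that needed several of these locks would take them in the
documented order and release them in reverse::

    static void lock_order_sketch(struct kvm *kvm, struct kvm_vcpu *vcpu)
    {
            mutex_lock(&kvm->lock);
            mutex_lock(&vcpu->mutex);
            mutex_lock(&kvm->slots_lock);
            mutex_lock(&kvm->irq_lock);

            /* ... critical section ... */

            mutex_unlock(&kvm->irq_lock);
            mutex_unlock(&kvm->slots_lock);
            mutex_unlock(&vcpu->mutex);
            mutex_unlock(&kvm->lock);
    }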

Everything else is a leaf: no other lock is taken inside the critical
sections.

2. Exception
------------

Fast page fault:

Fast page fault is the fast path which fixes the guest page fault out of
the mmu-lock on x86. Currently, the page fault can be fast in one of the
following two cases:

1. Access Tracking: The SPTE is not present, but it is marked for access
   tracking. That means we need to restore the saved R/X bits. This is
   described in more detail later below.

2. Write-Protection: The SPTE is present and the fault is caused by
   write-protect. That means we just need to change the W bit of the spte.

What we use to avoid all the races is the Host-writable bit and MMU-writable bit
on the spte:

- Host-writable means the gfn is writable in the host kernel page tables and in
  its KVM memslot.
- MMU-writable means the gfn is writable in the guest's mmu and it is not
  write-protected by shadow page write-protection.
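
A minimal sketch of how the fast path combines the two bits; the bit
positions here are illustrative, not the kernel's actual encoding::

    #define SPTE_HOST_WRITABLE BIT_ULL(57) /* illustrative position */
    #define SPTE_MMU_WRITABLE  BIT_ULL(58) /* illustrative position */

    /* The W bit may be set locklessly only if both bits are set. */
    static bool spte_can_locklessly_be_made_writable(u64 spte)
    {
            return (spte & SPTE_HOST_WRITABLE) &&
                   (spte & SPTE_MMU_WRITABLE);
    }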

On fast page fault path, we will use cmpxchg to atomically set the spte W
bit if spte.HOST_WRITEABLE = 1 and spte.WRITE_PROTECT = 1, to restore the saved
R/X bits for an access-tracked spte, or both. This is safe because any
concurrent change to these bits can be detected by cmpxchg.
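
The lockless update itself can be sketched as follows (a simplification;
the helper name is hypothetical)::

    /* Install new_spte only if the spte still holds the value read
     * earlier; any concurrent change makes the cmpxchg fail and the
     * fault is retried under mmu-lock.
     */
    static bool fast_pf_fix_spte(u64 *sptep, u64 old_spte, u64 new_spte)
    {
            return cmpxchg64(sptep, old_spte, new_spte) == old_spte;
    }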

But we need to carefully check these cases:

1) The mapping from gfn to pfn

The mapping from gfn to pfn may be changed since we can only ensure the pfn
is not changed during cmpxchg. This is an ABA problem, for example, the case
below will happen:

+------------------------------------------------------------------------+
| At the beginning::                                                     |
|                                                                        |
|        gpte = gfn1                                                     |
|        gfn1 is mapped to pfn1 on host                                  |
|        spte is the shadow page table entry corresponding with gpte and |
|        spte = pfn1                                                     |
+------------------------------------------------------------------------+
| On fast page fault path:                                               |
+------------------------------------+-----------------------------------+
| CPU 0:                             | CPU 1:                            |
+------------------------------------+-----------------------------------+
| ::                                 |                                   |
|                                    |                                   |
|   old_spte = *spte;                |                                   |
+------------------------------------+-----------------------------------+
|                                    | pfn1 is swapped out::             |
|                                    |                                   |
|                                    |    spte = 0;                      |
|                                    |                                   |
|                                    | pfn1 is re-alloced for gfn2.      |
|                                    |                                   |
|                                    | gpte is changed to point to       |
|                                    | gfn2 by the guest::               |
|                                    |                                   |
|                                    |    spte = pfn1;                   |
+------------------------------------+-----------------------------------+
| ::                                 |                                   |
|                                    |                                   |
|   if (cmpxchg(spte, old_spte, old_spte+W)                              |
|        mark_page_dirty(vcpu->kvm, gfn1)                                |
|             OOPS!!!                                                    |
+------------------------------------------------------------------------+

We dirty-log for gfn1, which means gfn2 is lost in the dirty bitmap.

For direct sp, we can easily avoid it since the spte of direct sp is fixed
to gfn.  For indirect sp, we disabled fast page fault for simplicity.

A solution for indirect sp could be to pin the gfn, for example via
gfn_to_pfn_memslot_atomic, before the cmpxchg.  After the pinning:

- We have held the refcount of pfn; that means the pfn can not be freed and
  be reused for another gfn.
- The pfn is writable and therefore it cannot be shared between different gfns
  by KSM.

Then, we can ensure the dirty bitmap is correctly set for a gfn.

2) Dirty bit tracking

In the original code, the spte can be fast updated (non-atomically) if the
spte is read-only and the Accessed bit has already been set, since the
Accessed bit and Dirty bit can not be lost.

But it is not true after fast page fault, since the spte can be marked
writable between reading spte and updating spte, as in the case below:

+------------------------------------------------------------------------+
| At the beginning::                                                     |
|                                                                        |
|        spte.W = 0                                                      |
|        spte.Accessed = 1                                               |
+------------------------------------+-----------------------------------+
| CPU 0:                             | CPU 1:                            |
+------------------------------------+-----------------------------------+
| In mmu_spte_clear_track_bits()::   |                                   |
|                                    |                                   |
|  old_spte = *spte;                 |                                   |
|                                    |                                   |
|                                    |                                   |
|  /* 'if' condition is satisfied. */|                                   |
|  if (old_spte.Accessed == 1 &&     |                                   |
|       old_spte.W == 0)             |                                   |
|      spte = 0ull;                  |                                   |
+------------------------------------+-----------------------------------+
|                                    | on fast page fault path::         |
|                                    |                                   |
|                                    |    spte.W = 1                     |
|                                    |                                   |
|                                    | memory write on the spte::        |
|                                    |                                   |
|                                    |    spte.Dirty = 1                 |
+------------------------------------+-----------------------------------+
|  ::                                |                                   |
|                                    |                                   |
|   else                             |                                   |
|      old_spte = xchg(spte, 0ull)   |                                   |
|   if (old_spte.Accessed == 1)      |                                   |
|      kvm_set_pfn_accessed(spte.pfn)|                                   |
|   if (old_spte.Dirty == 1)         |                                   |
|      kvm_set_pfn_dirty(spte.pfn)   |                                   |
|      OOPS!!!                       |                                   |
+------------------------------------+-----------------------------------+

The Dirty bit is lost in this case.

In order to avoid this kind of issue, we always treat the spte as "volatile"
if it can be updated out of mmu-lock [see spte_has_volatile_bits()]; it means
the spte is always atomically updated in this case.

3) flush tlbs due to spte updated

If the spte is updated from writable to read-only, we should flush all TLBs,
otherwise rmap_write_protect will find a read-only spte, even though the
writable spte might be cached on a CPU's TLB.

As mentioned before, the spte can be updated to writable out of mmu-lock on
the fast page fault path. In order to easily audit the path, we check in
mmu_spte_update() whether TLBs need to be flushed for this reason, since it
is a common function to update spte (present -> present).
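
A sketch of the resulting check, assuming a predicate like the kernel's
is_writable_pte()::

    /* A flush is needed when write access is being removed: a stale
     * writable translation could otherwise survive in some CPU's TLB.
     */
    static bool spte_update_needs_flush(u64 old_spte, u64 new_spte)
    {
            return is_writable_pte(old_spte) && !is_writable_pte(new_spte);
    }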

Since the spte is "volatile" if it can be updated out of mmu-lock, we always
atomically update the spte, and the race caused by fast page fault can be
avoided. See the comments in spte_has_volatile_bits() and mmu_spte_update().
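
For example, clearing an spte must use an atomic exchange whenever the spte
is volatile, so that Accessed/Dirty bits set by a concurrent fast page fault
are not lost. A simplified, hypothetical helper::

    static u64 clear_spte(u64 *sptep, bool volatile_spte)
    {
            u64 old_spte;

            if (volatile_spte) {
                    /* Atomic: a racing lockless update cannot be lost. */
                    old_spte = xchg(sptep, 0ull);
            } else {
                    /* No lockless writers, a plain write is enough. */
                    old_spte = *sptep;
                    WRITE_ONCE(*sptep, 0ull);
            }
            return old_spte;
    }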

Lockless Access Tracking:

This is used for Intel CPUs that are using EPT but do not support the EPT A/D
bits. In this case, PTEs are tagged as A/D disabled (using ignored bits), and
when the KVM MMU notifier is called to track accesses to a page (via
kvm_mmu_notifier_clear_flush_young), it marks the PTE not-present in hardware
by clearing the RWX bits in the PTE and storing the original R & X bits in more
unused/ignored bits. When the VM tries to access the page later on, a fault is
generated and the fast page fault mechanism described above is used to
atomically restore the PTE to a Present state. The W bit is not saved when the
PTE is marked for access tracking, and during restoration to the Present state,
the W bit is set depending on whether or not it was a write access. If it
wasn't, then the W bit will remain clear until a write access happens, at which
time it will be set using the Dirty tracking mechanism described above.
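
The encoding can be sketched as follows; the shift and masks are
illustrative, not the kernel's actual bit layout::

    #define RX_BITS          0x5ull /* R and X bits, illustrative */
    #define SAVED_RX_SHIFT   54     /* ignored bits, illustrative */

    /* Park R/X in ignored bits and clear RWX so the next access faults. */
    static u64 mark_spte_for_access_track(u64 spte)
    {
            spte |= (spte & RX_BITS) << SAVED_RX_SHIFT;
            return spte & ~0x7ull; /* clear R, W and X */
    }

    /* On the next fault, restore the saved R/X bits; the real code does
     * this atomically via the cmpxchg-based fast path described above.
     */
    static u64 restore_acc_track_spte(u64 spte)
    {
            spte |= (spte >> SAVED_RX_SHIFT) & RX_BITS;
            return spte & ~(RX_BITS << SAVED_RX_SHIFT);
    }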

3. Reference
------------

``kvm_lock``
^^^^^^^^^^^^

:Type:     mutex
:Arch:     any
:Protects: - vm_list

``kvm_usage_lock``
^^^^^^^^^^^^^^^^^^

:Type:     mutex
:Arch:     any
:Protects: - kvm_usage_count
           - hardware virtualization enable/disable

``kvm->mn_invalidate_lock``
^^^^^^^^^^^^^^^^^^^^^^^^^^^

:Type:     spinlock_t
:Arch:     any
:Protects: mn_active_invalidate_count, mn_memslots_update_rcuwait

``kvm->arch.tsc_write_lock``
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

:Type:     raw_spinlock_t
:Arch:     x86
:Protects: - kvm_arch::{last_tsc_write,last_tsc_nsec,last_tsc_offset}
           - tsc offset in vmcb
:Comment:  'raw' because updating the tsc offsets must not be preempted.

``kvm->mmu_lock``
^^^^^^^^^^^^^^^^^

:Type:     spinlock_t or rwlock_t
:Arch:     any
:Protects: - shadow page/shadow tlb entry

``kvm->srcu``
^^^^^^^^^^^^^

:Type:     srcu lock
:Arch:     any
:Protects: - kvm->memslots
           - kvm->buses
:Comment:  The srcu read lock must be held while accessing memslots (e.g.
           when using gfn_to_* functions) and while accessing in-kernel
           MMIO/PIO address->device structure mapping (kvm->buses).
           The srcu index can be stored in kvm_vcpu->srcu_idx per vcpu
           if it is needed by multiple functions.
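
A minimal usage sketch, storing the index as described above::

    static void vcpu_access_memslots(struct kvm_vcpu *vcpu)
    {
            vcpu->srcu_idx = srcu_read_lock(&vcpu->kvm->srcu);
            /* ... gfn_to_* lookups, in-kernel MMIO/PIO dispatch ... */
            srcu_read_unlock(&vcpu->kvm->srcu, vcpu->srcu_idx);
    }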

``kvm->slots_arch_lock``
^^^^^^^^^^^^^^^^^^^^^^^^

:Type:     mutex
:Arch:     any (only needed on x86 though)
:Protects: any arch-specific fields of memslots that have to be modified
           in a ``kvm->srcu`` read-side critical section.
:Comment:  must be taken inside kvm->slots_lock.

``blocked_vcpu_on_cpu_lock``
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

:Type:     spinlock_t
:Arch:     x86
:Protects: blocked_vcpu_on_cpu
:Comment:  This is a per-CPU lock and it is used for VT-d posted-interrupts.
           When VT-d posted-interrupts are supported and the VM has assigned
           devices, we put the blocked vCPU on the list blocked_vcpu_on_cpu
           protected by blocked_vcpu_on_cpu_lock. When VT-d hardware issues
           a wakeup notification event because external interrupts from the
           assigned devices happen, we find the vCPU on the list and wake
           it up.