
.. SPDX-License-Identifier: GPL-2.0

=================
KVM Lock Overview
=================

1. Acquisition Orders
---------------------

The acquisition orders for mutexes are as follows:
- cpus_read_lock() is taken outside kvm_lock

- kvm->lock is taken outside vcpu->mutex

- kvm->lock is taken outside kvm->slots_lock and kvm->irq_lock

- kvm->slots_lock is taken outside kvm->irq_lock, though acquiring
  them together is quite rare. (A sketch of this nesting follows the
  list.)
- kvm->mn_active_invalidate_count ensures that pairs of
  invalidate_range_start() and invalidate_range_end() callbacks
  use the same memslots array.  kvm->slots_lock and kvm->slots_arch_lock
  are taken on the waiting side in install_new_memslots, so MMU notifiers
  must not take either kvm->slots_lock or kvm->slots_arch_lock.
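As a minimal sketch of the mutex ordering above (the function and its
context are hypothetical, not kernel code; outer locks are taken before
inner ones)::

    static void example_nested_acquire(struct kvm *kvm, struct kvm_vcpu *vcpu)
    {
            mutex_lock(&kvm->lock);          /* outermost */

            mutex_lock(&vcpu->mutex);        /* inside kvm->lock */
            mutex_unlock(&vcpu->mutex);

            mutex_lock(&kvm->slots_lock);    /* also inside kvm->lock */
            mutex_lock(&kvm->irq_lock);      /* innermost */
            mutex_unlock(&kvm->irq_lock);
            mutex_unlock(&kvm->slots_lock);

            mutex_unlock(&kvm->lock);
    }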
For SRCU:

- ``synchronize_srcu(&kvm->srcu)`` is called inside critical sections
  for kvm->lock, vcpu->mutex and kvm->slots_lock.  These locks _cannot_
  be taken inside a kvm->srcu read-side critical section; that is, the
  following is broken::

      srcu_read_lock(&kvm->srcu);
      mutex_lock(&kvm->slots_lock);

  A sketch of the safe ordering follows this list.
- kvm->slots_arch_lock instead is released before the call to
  ``synchronize_srcu()``.  It _can_ therefore be taken inside a
  kvm->srcu read-side critical section, for example while processing
  a vmexit.
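The safe ordering implied by the first bullet can be sketched as follows
(illustrative only, not taken from the kernel sources): finish the
kvm->srcu read side before taking a lock under which
``synchronize_srcu(&kvm->srcu)`` may run::

    int idx;

    /* Safe: the SRCU read-side critical section ends before
     * kvm->slots_lock, which synchronize_srcu() can be called
     * under, is acquired. */
    idx = srcu_read_lock(&kvm->srcu);
    /* ... read memslots ... */
    srcu_read_unlock(&kvm->srcu, idx);

    mutex_lock(&kvm->slots_lock);
    /* ... may end up calling synchronize_srcu(&kvm->srcu) ... */
    mutex_unlock(&kvm->slots_lock);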
On x86:

- vcpu->mutex is taken outside kvm->arch.hyperv.hv_lock and kvm->arch.xen.xen_lock
- kvm->arch.mmu_lock is an rwlock.  kvm->arch.tdp_mmu_pages_lock and
  kvm->arch.mmu_unsync_pages_lock are taken inside kvm->arch.mmu_lock, and
  cannot be taken without already holding kvm->arch.mmu_lock (typically with
  ``read_lock`` for the TDP MMU, thus the need for additional spinlocks);
  see the sketch below.
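A sketch of that rwlock nesting (hypothetical helper, using the field
paths as written above; the real users live in arch/x86/kvm/mmu/)::

    static void example_tdp_mmu_update(struct kvm *kvm)
    {
            /* The TDP MMU usually takes mmu_lock only for read ... */
            read_lock(&kvm->arch.mmu_lock);

            /* ... so data that can change under read_lock needs its
             * own spinlock, taken inside mmu_lock. */
            spin_lock(&kvm->arch.tdp_mmu_pages_lock);
            /* ... update TDP-MMU page bookkeeping ... */
            spin_unlock(&kvm->arch.tdp_mmu_pages_lock);

            read_unlock(&kvm->arch.mmu_lock);
    }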
2. Exception
------------
Fast page fault:

Fast page fault is the fast path which fixes the guest page fault out of
the mmu-lock on x86. Currently, the page fault can be fast in one of the
following two cases:

1. Access Tracking: The SPTE is not present, but it is marked for access
   tracking. That means we need to restore the saved R/X bits. This is
   described in more detail later below.
2. Write-Protection: The SPTE is present and the fault is caused by
   write-protect. That means we just need to change the W bit of the spte.
What we use to avoid all the races is the Host-writable bit and MMU-writable bit
on the spte:

- Host-writable means the gfn is writable in the host kernel page tables and in
  its KVM memslot.
- MMU-writable means the gfn is writable in the guest's mmu and it is not
  write-protected by shadow page write-protection.
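As a sketch of the resulting check (the real bit definitions live in
arch/x86/kvm/mmu/spte.h; the positions below are placeholders, not the
kernel's actual layout)::

    #define EXAMPLE_SPTE_HOST_WRITABLE  BIT_ULL(57)  /* placeholder position */
    #define EXAMPLE_SPTE_MMU_WRITABLE   BIT_ULL(58)  /* placeholder position */

    /* The fast path may set the W bit only when both software bits
     * say the write is really allowed. */
    static bool example_can_fast_fix_write(u64 spte)
    {
            return (spte & EXAMPLE_SPTE_HOST_WRITABLE) &&
                   (spte & EXAMPLE_SPTE_MMU_WRITABLE);
    }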
On the fast page fault path, we use cmpxchg to atomically set the spte W
bit if spte.HOST_WRITEABLE = 1 and spte.WRITE_PROTECT = 1, to restore the
saved R/X bits for an access-tracked spte, or both. This is safe because
any concurrent change to these bits is detected by the cmpxchg.
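A simplified sketch of that update (the real logic is in
fast_pf_fix_direct_spte() in arch/x86/kvm/mmu/mmu.c; this stand-alone
version shows only the cmpxchg)::

    static bool example_fast_fix_spte(u64 *sptep, u64 old_spte, u64 new_spte)
    {
            /* If anything changed the SPTE after it was read, the
             * cmpxchg fails and the fast path must retry or bail out. */
            return cmpxchg64(sptep, old_spte, new_spte) == old_spte;
    }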
But we need to carefully check these cases:

1) The mapping from gfn to pfn

The mapping from gfn to pfn may change, since we can only ensure the pfn
is not changed during the cmpxchg. This is an ABA problem; for example,
the following can happen:

+------------------------------------------------------------------------+
| At the beginning::                                                     |
|                                                                        |
|     gpte = gfn1                                                        |
|     gfn1 is mapped to pfn1 on host                                     |
|     spte is the shadow page table entry corresponding with gpte and    |
|     spte = pfn1                                                        |
+------------------------------------------------------------------------+
| On fast page fault path:                                               |
+------------------------------------+-----------------------------------+
| CPU 0:                             | CPU 1:                            |
+------------------------------------+-----------------------------------+
| ::                                 |                                   |
|                                    |                                   |
|   old_spte = *spte;                |                                   |
+------------------------------------+-----------------------------------+
|                                    | pfn1 is swapped out::             |
|                                    |                                   |
|                                    |    spte = 0;                      |
|                                    |                                   |
|                                    | pfn1 is re-alloced for gfn2.      |
|                                    |                                   |
|                                    | gpte is changed to point to       |
|                                    | gfn2 by the guest::               |
|                                    |                                   |
|                                    |    spte = pfn1;                   |
+------------------------------------+-----------------------------------+
| ::                                 |                                   |
|                                    |                                   |
|   if (cmpxchg(spte, old_spte, old_spte+W)                              |
|       mark_page_dirty(vcpu->kvm, gfn1)                                 |
|            OOPS!!!                                                     |
+------------------------------------------------------------------------+
We dirty-log for gfn1; that means gfn2 is lost in the dirty bitmap.

For direct sp, we can easily avoid this since the spte of a direct sp is
fixed to the gfn. For indirect sp, we disabled fast page fault for
simplicity.
A solution for indirect sp could be to pin the gfn, for example via
kvm_vcpu_gfn_to_pfn_atomic, before the cmpxchg. After the pinning:

- We have held the refcount of pfn; that means the pfn can not be freed
  and be reused for another gfn.
- The pfn is writable and therefore it cannot be shared between different
  gfns by KSM.

Then, we can ensure the dirty bitmap is correctly set for a gfn.
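A sketch of that pinning (hypothetical helper; it assumes the
kvm_vcpu_gfn_to_pfn_atomic() and kvm_release_pfn_clean() APIs as they
existed when this text was written)::

    static bool example_fast_fix_indirect(struct kvm_vcpu *vcpu, gfn_t gfn,
                                          u64 *sptep, u64 old_spte, u64 new_spte)
    {
            kvm_pfn_t pfn = kvm_vcpu_gfn_to_pfn_atomic(vcpu, gfn);
            bool ret;

            if (is_error_pfn(pfn))
                    return false;

            /* The refcount is held: the pfn cannot be freed and reused
             * for another gfn while the cmpxchg runs. */
            ret = cmpxchg64(sptep, old_spte, new_spte) == old_spte;

            kvm_release_pfn_clean(pfn);
            return ret;
    }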
2) Dirty bit tracking

In the original code, the spte can be fast updated (non-atomically) if the
spte is read-only and the Accessed bit has already been set, since the
Accessed bit and the Dirty bit can not be lost.

But that is not true after fast page fault, since the spte can be marked
writable between reading and updating it, as in the following case:
+------------------------------------------------------------------------+
| At the beginning::                                                     |
|                                                                        |
|     spte.W = 0                                                         |
|     spte.Accessed = 1                                                  |
+------------------------------------+-----------------------------------+
| CPU 0:                             | CPU 1:                            |
+------------------------------------+-----------------------------------+
| In mmu_spte_clear_track_bits()::   |                                   |
|                                    |                                   |
|  old_spte = *spte;                 |                                   |
|                                    |                                   |
|                                    |                                   |
|  /* 'if' condition is satisfied. */|                                   |
|  if (old_spte.Accessed == 1 &&     |                                   |
|       old_spte.W == 0)             |                                   |
|      spte = 0ull;                  |                                   |
+------------------------------------+-----------------------------------+
|                                    | on fast page fault path::         |
|                                    |                                   |
|                                    |    spte.W = 1                     |
|                                    |                                   |
|                                    | memory write on the spte::        |
|                                    |                                   |
|                                    |    spte.Dirty = 1                 |
+------------------------------------+-----------------------------------+
|  ::                                |                                   |
|                                    |                                   |
|   else                             |                                   |
|      old_spte = xchg(spte, 0ull)   |                                   |
|   if (old_spte.Accessed == 1)      |                                   |
|      kvm_set_pfn_accessed(spte.pfn);                                   |
|   if (old_spte.Dirty == 1)         |                                   |
|      kvm_set_pfn_dirty(spte.pfn);                                      |
|      OOPS!!!                       |                                   |
+------------------------------------+-----------------------------------+

The Dirty bit is lost in this case.
In order to avoid this kind of issue, we always treat the spte as "volatile"
if it can be updated out of mmu-lock [see spte_has_volatile_bits()]; it means
the spte is always atomically updated in this case.
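A sketch of what this means when clearing an SPTE (simplified, mirroring
the mmu_spte_clear_track_bits() logic shown in the table above)::

    static u64 example_clear_spte(u64 *sptep, bool volatile_bits)
    {
            u64 old_spte;

            if (volatile_bits) {
                    /* Atomic exchange, so Accessed/Dirty bits set by
                     * hardware or the fast path are not lost. */
                    old_spte = xchg(sptep, 0ull);
            } else {
                    /* Non-atomic path, only when nothing can change
                     * the SPTE concurrently. */
                    old_spte = *sptep;
                    *sptep = 0ull;
            }

            return old_spte;
    }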
3) flush tlbs due to spte updated

If the spte is updated from writable to read-only, we should flush all TLBs,
otherwise rmap_write_protect will find a read-only spte, even though the
writable spte might be cached on a CPU's TLB.
As mentioned before, the spte can be updated to writable out of mmu-lock on
the fast page fault path. In order to easily audit the path, we check in
mmu_spte_update() whether TLBs need to be flushed for this reason, since it
is the common function for updating an spte (present -> present).
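The audited condition can be sketched as a pure predicate (EXAMPLE_SPTE_W
is a placeholder for the real W bit mask)::

    #define EXAMPLE_SPTE_W  BIT_ULL(1)  /* placeholder W bit */

    static bool example_spte_update_needs_flush(u64 old_spte, u64 new_spte)
    {
            /* Writable -> read-only transitions require flushing all
             * TLBs, since a stale writable translation may be cached. */
            return (old_spte & EXAMPLE_SPTE_W) && !(new_spte & EXAMPLE_SPTE_W);
    }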
Since the spte is "volatile" if it can be updated out of mmu-lock, we always
atomically update the spte, and the race caused by fast page fault can be
avoided. See the comments in spte_has_volatile_bits() and mmu_spte_update().
Lockless Access Tracking:

This is used for Intel CPUs that use EPT but do not support the EPT A/D
bits. In this case, PTEs are tagged as A/D disabled (using ignored bits),
and when the KVM MMU notifier is called to track accesses to a page (via
kvm_mmu_notifier_clear_flush_young), it marks the PTE not-present in hardware
by clearing the RWX bits in the PTE and storing the original R & X bits in
more unused/ignored bits. When the VM tries to access the page later on, a
fault is generated and the fast page fault mechanism described above is used
to atomically restore the PTE to a Present state. The W bit is not saved when
the PTE is marked for access tracking, and during the restoration to the
Present state the W bit is set depending on whether or not it was a write
access. If it wasn't, then the W bit will remain clear until a write access
happens, at which time it will be set using the Dirty tracking mechanism
described above.
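A sketch in the spirit of mark_spte_for_access_track() (the bit positions
are placeholders, not the kernel's real layout)::

    #define EXAMPLE_EPT_R        BIT_ULL(0)
    #define EXAMPLE_EPT_X        BIT_ULL(2)
    #define EXAMPLE_RWX_MASK     GENMASK_ULL(2, 0)
    #define EXAMPLE_SAVED_SHIFT  52          /* placeholder ignored bits */

    static u64 example_mark_spte_for_access_track(u64 spte)
    {
            /* Save R and X; the W bit is deliberately not saved. */
            u64 saved = (spte & (EXAMPLE_EPT_R | EXAMPLE_EPT_X)) <<
                        EXAMPLE_SAVED_SHIFT;

            /* Clearing RWX makes the PTE not-present to hardware. */
            return (spte & ~EXAMPLE_RWX_MASK) | saved;
    }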
3. Reference
------------
``kvm_lock``
^^^^^^^^^^^^

:Type:     mutex
:Arch:     any
:Protects: - vm_list
           - kvm_usage_count
           - hardware virtualization enable/disable
:Comment:  KVM also disables CPU hotplug via cpus_read_lock() during
           enable/disable.
``kvm->mn_invalidate_lock``
^^^^^^^^^^^^^^^^^^^^^^^^^^^

:Type:     spinlock_t
:Arch:     any
:Protects: mn_active_invalidate_count, mn_memslots_update_rcuwait
``kvm_arch::tsc_write_lock``
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

:Type:     raw_spinlock_t
:Arch:     x86
:Protects: - kvm_arch::{last_tsc_write,last_tsc_nsec,last_tsc_offset}
           - tsc offset in vmcb
:Comment:  'raw' because updating the tsc offsets must not be preempted.
``kvm->mmu_lock``
^^^^^^^^^^^^^^^^^

:Type:     spinlock_t or rwlock_t
:Arch:     any
:Protects: - shadow page/shadow tlb entry
:Comment:  it is a spinlock since it is used in the mmu notifier.
``kvm->srcu``
^^^^^^^^^^^^^

:Type:     srcu lock
:Arch:     any
:Protects: - kvm->memslots
           - kvm->buses
:Comment:  The srcu read lock must be held while accessing memslots (e.g.
           when using gfn_to_* functions) and while accessing in-kernel
           MMIO/PIO address->device structure mappings (kvm->buses).
           The srcu index can be stored in kvm_vcpu->srcu_idx per vcpu
           if it is needed by multiple functions.
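A minimal usage sketch of the pattern described in the comment
(hypothetical function, not kernel code)::

    static void example_touch_memslots(struct kvm_vcpu *vcpu, gfn_t gfn)
    {
            struct kvm_memory_slot *slot;

            /* Hold the SRCU read lock around memslot access; stash the
             * index in vcpu->srcu_idx so other functions called from
             * here can find it. */
            vcpu->srcu_idx = srcu_read_lock(&vcpu->kvm->srcu);

            slot = kvm_vcpu_gfn_to_memslot(vcpu, gfn);
            /* ... use slot ... */

            srcu_read_unlock(&vcpu->kvm->srcu, vcpu->srcu_idx);
    }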
``kvm->slots_arch_lock``
^^^^^^^^^^^^^^^^^^^^^^^^

:Type:     mutex
:Arch:     any (only needed on x86 though)
:Protects: any arch-specific fields of memslots that have to be modified
           in a ``kvm->srcu`` read-side critical section.
:Comment:  must be held before reading the pointer to the current memslots,
           until after all changes to the memslots are complete.
``wakeup_vcpus_on_cpu_lock``
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

:Type:     spinlock_t
:Arch:     x86
:Protects: wakeup_vcpus_on_cpu
:Comment:  This is a per-CPU lock and it is used for VT-d posted-interrupts.
           When VT-d posted-interrupts are supported and the VM has assigned
           devices, we put the blocked vCPU on the list blocked_vcpu_on_cpu
           protected by blocked_vcpu_on_cpu_lock. When VT-d hardware issues
           a wakeup notification event (because an external interrupt from an
           assigned device has arrived), we find the vCPU on the list and
           wake it up.