- correctness: the guest should not be able to determine that it is running
               on an emulated mmu except for timing (we attempt to comply
               with the specification, not emulate the behavior of a
               particular implementation such as tlb size)
- security:    the guest must not be able to touch host memory not assigned
               to it
- performance: minimize the performance penalty imposed by the mmu
- scaling:     need to scale to large memory and large vcpu guests
- hardware:    support the full range of x86 virtualization hardware
- integration: Linux memory management code must be in control of guest memory
               so that swapping, page migration, page merging, transparent
               hugepages, and similar features work without change
- dirty tracking: report writes to guest memory to enable live migration
               and framebuffer-based displays
- footprint:   keep the amount of pinned kernel memory low (most memory
               should be shrinkable)
- reliability: avoid multipage or GFP_ATOMIC allocations
The mmu supports first-generation mmu hardware, which allows an atomic switch
of the current paging mode and cr3 during guest entry, as well as
two-dimensional paging (AMD's NPT and Intel's EPT).  The emulated hardware
it exposes is the traditional 2/3/4 level x86 mmu, with support for global
pages, pae, pse, pse36, cr0.wp, and 1GB pages.
- when guest paging is disabled, we translate guest physical addresses to
  host physical addresses (gpa->hpa)
- when guest paging is enabled, we translate guest virtual addresses, to
  guest physical addresses, to host physical addresses (gva->gpa->hpa)
- when the guest launches a guest of its own, we translate nested guest
  virtual addresses, to nested guest physical addresses, to guest physical
  addresses, to host physical addresses (ngva->ngpa->gpa->hpa)
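The chained translations above can be sketched as table lookups composed in
sequence.  This is a toy model with single-level "page tables" indexed by
frame number; the names (gva_to_gpa_pfn, translate, gva_to_hpa) are invented
for the sketch and are not KVM's API:

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SHIFT 12
#define PAGE_MASK  ((1ULL << PAGE_SHIFT) - 1)

/* Toy single-level translation tables indexed by page frame number. */
static const uint64_t gva_to_gpa_pfn[8] = { 3, 5, 1, 0, 7, 2, 6, 4 };
static const uint64_t gpa_to_hpa_pfn[8] = { 6, 0, 4, 2, 1, 7, 3, 5 };

static uint64_t translate(const uint64_t *table, uint64_t addr)
{
	/* Translate the frame number, keep the offset within the page. */
	return (table[addr >> PAGE_SHIFT] << PAGE_SHIFT) | (addr & PAGE_MASK);
}

/* gva->gpa->hpa is the paging case; gpa->hpa alone is the nonpaging case. */
static uint64_t gva_to_hpa(uint64_t gva)
{
	uint64_t gpa = translate(gva_to_gpa_pfn, gva);

	return translate(gpa_to_hpa_pfn, gpa);
}
```

The nested case simply composes a third lookup (ngva->ngpa) in front of the
other two.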
addresses (gpa->hva); note that two gpas may alias to the same hva, but not
vice versa.
Guest generated events:

- writes to control registers (especially cr3)
- invlpg/invlpga instruction execution
- access to missing or protected translations

Host generated events:

- changes in the gpa->hpa translation (either through gpa->hva changes or
  through hva->hpa changes)
- memory pressure (the shrinker)
The following table shows translations encoded by leaf ptes, with higher-level
translations in parentheses:
 Non-nested guests:
  nonpaging:     gpa->hpa
  paging:        gva->gpa->hpa
  paging, tdp:   (gva->)gpa->hpa
 Nested guests:
  non-tdp:       ngva->gpa->hpa  (*)
  tdp:           (ngva->)ngpa->gpa->hpa

(*) the guest hypervisor will encode the ngva->gpa translation into its page
    tables if npt is not present
    If set, leaf sptes reachable from this page are for a linear range.
    Examples include real mode translation, large guest pages backed by small
    host pages, and gpa->hpa translations when NPT or EPT is active.
    The linear range starts at (gfn << PAGE_SHIFT) and its size is determined
    by role.level (2MB for first level, 1GB for second level, 0.5TB for third
    level, 256TB for fourth level).
    When role.gpte_is_8_bytes=0, the guest uses 32-bit gptes while the host
    uses 64-bit sptes.  That means a guest page table contains more ptes than
    the host, so multiple shadow pages are needed to shadow one guest page.
    For first-level shadow pages, role.quadrant can be 0 or 1 and denotes the
    first or second 512-gpte block in the guest page table.  For second-level
    page tables, each 32-bit gpte is converted to two 64-bit sptes
    (since each first-level guest page is shadowed by two first-level
    shadow pages) so role.quadrant takes values in the range 0..3.  Each
    quadrant maps 1GB virtual address space.
    Reflects the size of the guest PTE for which the page is valid, i.e. '1'
    if 64-bit gptes are in use, '0' if 32-bit gptes are in use.
    Is 1 if the MMU instance cannot use A/D bits.  EPT did not have A/D
    bits before Haswell; shadow EPT page tables also cannot use A/D bits
    if the L1 hypervisor does not enable them.
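A simplified model of how role bits like these might be packed into a single
word, so that two shadow pages with different roles compare unequal.  This is
an illustrative subset only; the real `union kvm_mmu_page_role` has more
fields and a layout defined by the kernel, and bitfield ordering here is
compiler-dependent:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative subset of shadow-page role bits (not KVM's real layout). */
union mmu_page_role {
	uint32_t word;			/* whole role compared as one integer */
	struct {
		uint32_t level : 4;		/* page-table level */
		uint32_t gpte_is_8_bytes : 1;	/* 1: 64-bit gptes, 0: 32-bit */
		uint32_t quadrant : 2;		/* which block of the guest page */
		uint32_t direct : 1;		/* linear range, no guest table */
		uint32_t ad_disabled : 1;	/* A/D bits unusable */
	};
};
```

Comparing `word` values is what lets the shadow page lookup treat pages with
different roles as distinct, as the cr0.wp discussion later relies on.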
    A pageful of 64-bit sptes containing the translations for this page.
    Accessed by both kvm and hardware.
    The page pointed to by spt will have its page->private pointing back
    at the shadow page structure.
    sptes in spt point either at guest pages, or at lower-level shadow pages.
    Specifically, if sp1 and sp2 are shadow pages, then sp1->spt[n] may point
    at __pa(sp2->spt).  sp2 will point back at sp1 through parent_pte.
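The parent/child linkage just described can be sketched as follows.  This is
a toy model: plain pointers stand in for `__pa` physical addresses, and the
struct and helper names are invented for the sketch:

```c
#include <assert.h>
#include <stdint.h>

#define SPTES_PER_PAGE 512

/* Toy shadow page: a pageful of sptes plus a link back to the parent spte. */
struct shadow_page {
	uintptr_t spt[SPTES_PER_PAGE];	/* stands in for the spt page */
	uintptr_t *parent_pte;		/* spte in the parent pointing here */
};

/* Mirror "sp1->spt[n] points at sp2->spt, sp2 points back via parent_pte". */
static void link_shadow_page(struct shadow_page *parent, int n,
			     struct shadow_page *child)
{
	parent->spt[n] = (uintptr_t)child->spt;	/* stands in for __pa() */
	child->parent_pte = &parent->spt[n];
}
```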
    Only present on 32-bit hosts, where a 64-bit spte cannot be written
    atomically.  The reader uses this while running out of the MMU lock
    to detect in-progress updates and retry them until the writer has
    finished the write.
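The retry protocol can be sketched as a sequence-count read loop.  This is a
single-threaded illustration of the idea only; the real code uses
`clear_spte_count` with memory barriers and runs against concurrent writers:

```c
#include <assert.h>
#include <stdint.h>

/* A 64-bit spte stored as two halves, as on a 32-bit host. */
static volatile uint32_t spte_lo, spte_hi;
static volatile uint32_t update_count;	/* bumped around each update */

static void write_spte(uint64_t val)
{
	update_count++;			/* odd: update in progress */
	spte_lo = (uint32_t)val;
	spte_hi = (uint32_t)(val >> 32);
	update_count++;			/* even: update complete */
}

static uint64_t read_spte(void)
{
	uint32_t count, lo, hi;

	do {
		count = update_count;
		lo = spte_lo;
		hi = spte_hi;
		/* Retry if a write was in progress or happened meanwhile. */
	} while ((count & 1) || count != update_count);
	return ((uint64_t)hi << 32) | lo;
}
```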
    A guest may write to a page table many times, causing a lot of
    emulations if the page needs to be write-protected (see "Synchronized
    and unsynchronized pages" below).  Leaf pages can be unsynchronized
    so that they do not trigger frequent emulation, but this is not
    possible for non-leafs.  This field counts the number of emulations
    since the last time the page table was actually used; if emulation
    is triggered too frequently on this page, KVM will unmap the page
    to avoid emulation in the future.
- guest page fault (or npt page fault, or ept violation)

This is the most complicated event.  The cause of a page fault can be:

  - a true guest fault (the guest translation won't allow the access) (*)
  - access to a missing translation
  - access to a protected translation
    - when logging dirty pages, memory is write protected
    - synchronized shadow pages are write protected (*)
  - access to untranslatable memory (mmio)

  (*) not applicable in direct mode
Handling a page fault is performed as follows:

 - if the RSV bit of the error code is set, the page fault is caused by guest
   accessing MMIO and cached MMIO information is available.
   - walk shadow page table
   - check for valid generation number in the spte (see "Fast invalidation of
     MMIO sptes" below)
   - cache the information to vcpu->arch.mmio_gva, vcpu->arch.mmio_access and
     vcpu->arch.mmio_gfn, and call the emulator
 - If both P bit and R/W bit of error code are set, this could possibly
   be handled as a "fast page fault" (fixed without taking the MMU lock).  See
   the description in Documentation/virt/kvm/locking.txt.
 - if needed, walk the guest page tables to determine the guest translation
   (gva->gpa or ngpa->gpa)
   - if permissions are insufficient, reflect the fault back to the guest
 - determine the host page
   - if this is an mmio request, there is no host page; cache the info to
     vcpu->arch.mmio_gva, vcpu->arch.mmio_access and vcpu->arch.mmio_gfn
 - walk the shadow page table to find the spte for the translation,
   instantiating missing intermediate page tables as necessary
   - If this is an mmio request, cache the mmio info to the spte and set some
     reserved bits on the spte (see callers of kvm_mmu_set_mmio_spte_mask)
 - try to unsynchronize the page
   - if successful, we can let the guest continue and modify the gpte
 - emulate the instruction
   - if failed, unshadow the page and let the guest continue
 - update any translations that were modified by the instruction
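The decision sequence above can be sketched as a dispatch function.  This is
an illustrative reduction: the flag and enum names (`rsv`,
`present_and_write`, `HANDLED_MMIO`, ...) are invented for the sketch and do
not correspond to KVM's internals:

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative outcomes of handling a fault. */
enum fault_result { HANDLED_MMIO, HANDLED_FAST, REFLECTED, FIXED, EMULATED };

/* Invented flags summarizing the error code and walk results. */
struct fault {
	bool rsv;		/* RSV bit set: cached-MMIO fast path */
	bool present_and_write;	/* P and R/W set: fast page fault candidate */
	bool guest_perms_ok;	/* guest translation allows the access */
	bool unsync_ok;		/* page could be unsynchronized */
};

static enum fault_result handle_fault(const struct fault *f)
{
	if (f->rsv)
		return HANDLED_MMIO;	/* use cached mmio info, emulate */
	if (f->present_and_write)
		return HANDLED_FAST;	/* fixed without taking the MMU lock */
	if (!f->guest_perms_ok)
		return REFLECTED;	/* inject the fault into the guest */
	if (f->unsync_ok)
		return FIXED;		/* guest may write the gpte directly */
	return EMULATED;		/* emulate the faulting instruction */
}
```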
invlpg handling:

  - walk the shadow page hierarchy and drop affected translations
  - try to reinstantiate the indicated translation in the hope that the
    guest will use it in the near future

Guest control register updates:

- mov to cr3
  - look up new shadow roots
  - synchronize newly reachable shadow pages

- mov to cr0/cr4/efer
  - set up mmu context for new paging mode
  - look up new shadow roots
  - synchronize newly reachable shadow pages

Host translation updates:

  - mmu notifier called with updated hva
  - look up affected sptes through reverse map
  - drop (or update) translations
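The reverse-map step can be sketched as a gfn-indexed list of spte locations.
This is a toy fixed-size model; in KVM the rmap actually lives in per-memslot
arrays and the helper names here are invented:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define NGFNS    16
#define MAX_RMAP 4

/* Toy reverse map: for each gfn, the sptes that translate it. */
static uint64_t *rmap[NGFNS][MAX_RMAP];

static void rmap_add(unsigned long gfn, uint64_t *sptep)
{
	for (int i = 0; i < MAX_RMAP; i++)
		if (!rmap[gfn][i]) {
			rmap[gfn][i] = sptep;
			return;
		}
}

/* Drop every translation for gfn, as done when its hva is invalidated. */
static int rmap_drop(unsigned long gfn)
{
	int dropped = 0;

	for (int i = 0; i < MAX_RMAP; i++)
		if (rmap[gfn][i]) {
			*rmap[gfn][i] = 0;	/* clear the spte */
			rmap[gfn][i] = NULL;
			dropped++;
		}
	return dropped;
}
```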
- kernel write fault: spte.u=0, spte.w=1 (allows full kernel access,
  disallows user access)
- read fault: spte.u=1, spte.w=0 (allows full read access, disallows kernel
  write access)

In the first case there are two additional complications:

- if CR4.SMEP is enabled: since we've turned the page into a kernel page,
  the kernel may now execute it.  We handle this by also setting spte.nx.
  If we get a user fetch or read fault, we'll change spte.u=1 and
  spte.nx=gpte.nx back.  For this to work, KVM forces EFER.NX to 1 when
  shadow paging is in use.
- if CR4.SMAP is disabled: since the page has been changed to a kernel
  page, it can not be reused when CR4.SMAP is enabled.  We set
  CR4.SMAP && !CR0.WP into shadow page's role to avoid this case.
with one value of cr0.wp cannot be used when cr0.wp has a different value -
- the spte must point to a large host page
- the guest pte must be a large pte of at least equivalent size (if tdp is
  enabled, there is no guest pte and this condition is satisfied)
- if the spte will be writeable, the large page frame may not overlap any
  write-protected pages
- the guest page must be wholly contained by a single memory slot
To check the last two conditions, the mmu maintains a ->disallow_lpage set of
arrays for each memory slot and large page size.  Every write protected page
causes its disallow_lpage to be incremented, thus preventing instantiation of
a large spte.  The frames at the start and end of an unaligned memory slot
have artificially inflated ->disallow_lpages so they can never be instantiated.
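The accounting can be sketched as follows.  This is a toy model with a single
large-page size and one memory slot; the helper names mirror the description
above, not KVM's exact code:

```c
#include <assert.h>
#include <stdbool.h>

#define LPAGE_SIZE 512		/* small pages per large page */
#define NR_LPAGES  8

/* One counter per large-page frame in the (toy) memory slot. */
static int disallow_lpage[NR_LPAGES];

static void write_protect_page(unsigned long gfn)
{
	disallow_lpage[gfn / LPAGE_SIZE]++;
}

static void write_unprotect_page(unsigned long gfn)
{
	disallow_lpage[gfn / LPAGE_SIZE]--;
}

/* A writable large spte may only be instantiated when the count is zero. */
static bool can_map_large(unsigned long gfn)
{
	return disallow_lpage[gfn / LPAGE_SIZE] == 0;
}
```

Inflating the counters for the frames at an unaligned slot's edges then falls
out naturally: they start above zero and `can_map_large` never succeeds there.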
As mentioned in "Reaction to events" above, kvm will cache MMIO
information in leaf sptes.

MMIO sptes have a few spare bits, which are used to store a
generation number.  The global generation number is stored in
kvm_memslots(kvm)->generation, and increased whenever guest memory info
is changed.
Since only 19 bits are used to store generation-number on mmio spte, all
pages are zapped when there is an overflow.

Unfortunately, a single memory access might access kvm_memslots(kvm) multiple
times, the last one happening when the generation number is retrieved and
stored into the MMIO spte.  Thus, the MMIO spte may be created based on
out-of-date information, but with an up-to-date generation number.
To avoid this, the generation number is incremented again after synchronize_srcu
returns; thus, bit 63 of kvm_memslots(kvm)->generation set to 1 only during a
memslot update, while some SRCU readers might be using the old copy.  We do not
want to use MMIO sptes created with an odd generation number, and we can do
this without losing a bit in the MMIO spte.  The "update in-progress" bit of the
generation is not stored in MMIO sptes; because of that, if KVM creates an MMIO
spte while an update is in-progress, the next access to the spte will always be
a cache miss.  For example, a subsequent access during the update window will
miss due to the in-progress flag diverging, while an access after the update
window closes will have a higher generation number (as compared to the spte).
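The scheme can be sketched as follows.  This is an illustrative model of the
bit layout only: bit 0 stands in for the "update in-progress" bit, and the
masks and helper names are invented, not KVM's actual constants:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Illustrative layout: bit 0 of the generation is "update in-progress". */
#define GEN_IN_PROGRESS	1ULL
#define GEN_SPTE_BITS	19	/* generation bits kept in an MMIO spte */
#define GEN_SPTE_MASK	((1ULL << GEN_SPTE_BITS) - 1)

static uint64_t memslots_generation;

static void begin_memslot_update(void)
{
	memslots_generation |= GEN_IN_PROGRESS;		/* odd: in progress */
}

static void end_memslot_update(void)
{
	/* Drop the in-progress bit and advance the generation proper. */
	memslots_generation = (memslots_generation & ~GEN_IN_PROGRESS) + 2;
}

/* Snapshot taken when an MMIO spte is created; an odd snapshot can never
 * match the even, settled generation that lookups compare against. */
static uint64_t mmio_spte_gen(void)
{
	return memslots_generation & GEN_SPTE_MASK;
}

static bool mmio_spte_valid(uint64_t spte_gen)
{
	uint64_t cur = memslots_generation & ~GEN_IN_PROGRESS & GEN_SPTE_MASK;

	return spte_gen == cur;
}
```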
Further reading
===============

- NPT presentation from KVM Forum 2008
  http://www.linux-kvm.org/images/c/c8/KvmForum2008%24kdf2008_21.pdf