The mmu code attempts to satisfy the following requirements:

- correctness:    the guest should not be able to determine that it is running
                  on an emulated mmu except by timing
- security:       the guest must not be able to touch host memory not assigned
                  to it
- performance:    minimize the performance penalty imposed by the mmu
- scaling:        need to scale to large memory and large vcpu guests
- hardware:       support the full range of x86 virtualization hardware
- integration:    Linux memory management code must be in control of guest
                  memory so that swapping, page migration, and similar host
                  features work without change
- dirty tracking: report writes to guest memory to enable live migration
                  and framebuffer-based displays
- footprint:      keep the amount of pinned kernel memory low (most memory
                  should be swappable)
- reliability:    avoid multipage or GFP_ATOMIC allocations
The mmu supports first-generation mmu hardware, which allows an atomic switch
of the current paging mode and cr3 during guest entry, as well as
two-dimensional paging (AMD's NPT and Intel's EPT). The emulated hardware it
exposes is the traditional 2/3/4 level x86 mmu, with support for global
pages, pae, pse, pse36, cr0.wp, and 1GB pages.
The primary job of the mmu is to program the processor's mmu to translate
addresses for the guest. Different translations are required at different
times:

- when guest paging is disabled, we translate guest physical addresses to
  host physical addresses (gpa->hpa)
- when guest paging is enabled, we translate guest virtual addresses, to
  guest physical addresses, to host physical addresses (gva->gpa->hpa)
- when the guest launches a guest of its own, we translate nested guest
  virtual addresses, to nested guest physical addresses, to guest physical
  addresses, to host physical addresses (ngva->ngpa->gpa->hpa)
The primary challenge is to encode between 1 and 3 translations into hardware
that supports only 1 (traditional paging) or at most 2 (tdp) translations.
When the number of required translations matches the hardware, the mmu
operates in direct mode; otherwise it operates in shadow mode (see below).
Guest memory (gpa) is part of the user address space of the process that is
using kvm. Userspace defines the translation between guest addresses and user
addresses (gpa->hva); note that two gpas may alias to the same hva, but not
vice versa. These hvas may be backed by anonymous memory, file-backed memory,
or device memory, and may be paged out by the host at any time.
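For concreteness, here is how userspace establishes such a gpa->hva mapping
through the KVM API. This is a minimal sketch: error handling is omitted and
the slot number, gpa, and size are chosen arbitrarily for illustration.

    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <linux/kvm.h>

    /* Back 2MB of guest memory with anonymous host memory at gpa 0. */
    static void add_memslot(int vm_fd)
    {
            size_t size = 2 * 1024 * 1024;
            void *hva = mmap(NULL, size, PROT_READ | PROT_WRITE,
                             MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

            struct kvm_userspace_memory_region region = {
                    .slot            = 0,
                    .guest_phys_addr = 0,       /* gpa */
                    .memory_size     = size,
                    .userspace_addr  = (__u64)(unsigned long)hva, /* hva */
            };

            /* defines the gpa->hva translation described above */
            ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &region);
    }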
The mmu is driven by events, sometimes from the guest, sometimes from the
host.

Guest generated events:

- writes to control registers (especially cr3)
- invlpg/invlpga instruction execution
- access to missing or protected translations

Host generated events:

- changes in the gpa->hpa translation (either through gpa->hva changes or
  through hva->hpa changes)
- memory pressure (the shrinker)
The principal data structure is the shadow page, 'struct kvm_mmu_page'. The
following table shows the translations encoded by leaf ptes, with
higher-level translations in parentheses:
Non-nested guests:
  nonpaging:     gpa->hpa
  paging:        gva->gpa->hpa
  paging, tdp:   (gva->)gpa->hpa
Nested guests:
  non-tdp:       ngva->gpa->hpa  (*)
  tdp:           (ngva->)ngpa->gpa->hpa

(*) the guest hypervisor will encode the ngva->gpa translation into its page
    tables if npt is not present
Shadow pages contain the following information:

  role.direct:
    If set, leaf sptes reachable from this page are for a linear range.
    Examples include real mode translation, large guest pages backed by small
    host pages, and gpa->hpa translations when NPT or EPT is active.
    The linear range starts at (gfn << PAGE_SHIFT) and its size is determined
    by role.level (2MB for first level, 1GB for second level, 0.5TB for third
    level, 256TB for fourth level). If clear, this page corresponds to a
    guest page table denoted by the gfn field.
  role.quadrant:
    When role.cr4_pae=0, the guest uses 32-bit gptes while the host uses
    64-bit sptes. That means a guest page table contains more ptes than the
    host, so multiple shadow pages are needed to shadow one guest page.
    For first-level shadow pages, role.quadrant can be 0 or 1 and denotes the
    first or second 512-gpte block in the guest page table. For second-level
    page tables, each 32-bit gpte is converted to two 64-bit sptes
    (since each first-level guest page is shadowed by two first-level
    shadow pages) so role.quadrant takes values in the range 0..3. Each
    quadrant maps 1GB of virtual address space (see the sketch after this
    list).
  role.cr4_pae:
    Contains the value of cr4.pae for which the page is valid (i.e. whether
    32-bit or 64-bit gptes are in use).
  role.smm:
    Is 1 if the page is valid in system management mode. This field
    determines which of the kvm_memslots arrays was used to build this
    shadow page; it is also used to go back from a struct kvm_mmu_page
    to a memslot through the kvm_memslots_for_spte_role macro and
    __gfn_to_memslot.
  gfn:
    Either the guest page table containing the translations shadowed by this
    page, or the base page frame for linear translations. See role.direct.
  spt:
    A pageful of 64-bit sptes containing the translations for this page.
    Accessed by both kvm and hardware.
    The page pointed to by spt will have its page->private pointing back
    at the shadow page structure.
    sptes in spt point either at guest pages, or at lower-level shadow pages.
    Specifically, if sp1 and sp2 are shadow pages, then sp1->spt[n] may point
    at __pa(sp2->spt). sp2 will point back at sp1 through parent_pte.
  parent_ptes:
    The reverse mapping for the pte/ptes pointing at this page's spt. If
    bit 0 of parent_ptes is zero, only one spte points at this page and
    parent_ptes points at that single spte; otherwise, multiple sptes point
    at this page and (parent_ptes & ~0x1) points at a data structure holding
    a list of parent sptes.
  mmu_valid_gen:
    Generation number of the page. It is compared with kvm->arch.mmu_valid_gen
    during hash table lookup, and used to skip obsolete shadow pages (see the
    discussion of zapping below).
  clear_spte_count:
    Only present on 32-bit hosts, where a 64-bit spte cannot be written
    atomically. The reader uses this count while running outside of the MMU
    lock to detect in-progress updates and retry them until the writer has
    finished the write (see the read-side sketch after this list).
  write_flooding_count:
    A guest may write to a page table many times, causing a lot of
    emulations if the page needs to be write-protected (see "Synchronized
    and unsynchronized pages" below). Leaf pages can be unsynchronized
    so that they do not trigger frequent emulation, but this is not
    possible for non-leafs. This field counts the number of emulations
    since the last time the page table was actually used; if emulation
    is triggered too frequently on this page, KVM will unmap the page
    to avoid emulation in the future.
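Two of these fields are easier to see in code. First, role.quadrant: the
sketch below derives the quadrant from a guest virtual address when 32-bit
non-PAE guest page tables are shadowed by 64-bit sptes. It is illustrative
only, not KVM's actual helper: a first-level guest table maps 4MB while a
shadow page maps 2MB, so bit 21 selects the quadrant; a second-level guest
table maps 4GB while a shadow page maps 1GB, so bits 31:30 select it.

    static unsigned int quadrant_of(unsigned long gva, int level)
    {
            if (level == 1)
                    /* each guest PT maps 4MB, each shadow PT maps 2MB */
                    return (gva >> 21) & 1;
            /* level == 2: each guest PD maps 4GB, each shadow PD maps 1GB */
            return (gva >> 30) & 3;
    }

Second, clear_spte_count: on 32-bit hosts the two halves of an spte are
written separately, so a lockless reader has to detect a concurrent update
and retry. A minimal sketch of that retry loop, assuming the writer bumps
clear_spte_count around each update, and with the barrier placement reduced
to its essentials:

    static u64 read_spte_lockless(struct kvm_mmu_page *sp, u64 *sptep)
    {
            u32 *halves = (u32 *)sptep;
            union { u64 whole; u32 half[2]; } spte;
            int count;

            do {
                    count = READ_ONCE(sp->clear_spte_count);
                    smp_rmb();
                    spte.half[0] = READ_ONCE(halves[0]);
                    spte.half[1] = READ_ONCE(halves[1]);
                    smp_rmb();
            } while (count != READ_ONCE(sp->clear_spte_count));

            return spte.whole;
    }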
The mmu maintains a reverse mapping whereby all ptes mapping a page can be
reached given its gfn. This is used, for example, when swapping out a page.
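As an illustration of how the reverse map is used, the sketch below
write-protects every spte that currently maps a given gfn, as is done when
dirty logging is enabled. The lookup and iteration helpers are simplified
stand-ins for KVM's internal ones; only the overall shape matches the real
code.

    /* Walk the reverse map for one gfn and clear the writable bit in
     * every spte found, so the next guest write faults and can be
     * tracked. */
    static void write_protect_gfn(struct kvm *kvm,
                                  struct kvm_memory_slot *slot, gfn_t gfn)
    {
            struct kvm_rmap_head *rmap_head = gfn_to_rmap(slot, gfn);
            struct rmap_iterator iter;
            u64 *sptep;

            for_each_rmap_spte(rmap_head, &iter, sptep)
                    spte_write_protect(kvm, sptep); /* assumed helper */
    }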
The mmu reacts to the events listed above as follows.

- guest page fault (or npt page fault, or ept violation)

This is the most complicated event. The cause of a page fault can be:

  - a true guest fault (the guest translation won't allow the access) (*)
  - access to a missing translation
  - access to a protected translation
    - when logging dirty pages, memory is write protected
    - synchronized shadow pages are write protected (*)
  - access to untranslatable memory (mmio)

  (*) not applicable in direct mode
Handling a page fault is performed as follows (a condensed code sketch of
the dispatch appears after this list):

 - if the RSV bit of the error code is set, the page fault is caused by guest
   accessing MMIO and cached MMIO information is available
   - walk shadow page table
   - check for valid generation number in the spte (see "Fast invalidation of
     MMIO sptes" below)
   - cache the information to vcpu->arch.mmio_gva, vcpu->arch.access and
     vcpu->arch.mmio_gfn, and call the emulator
 - if both P bit and R/W bit of the error code are set, this could possibly
   be handled as a "fast page fault" (fixed without taking the MMU lock)
 - if needed, walk the guest page tables to determine the guest translation
   (gva->gpa or ngpa->gpa)
   - if permissions are insufficient, reflect the fault back to the guest
 - determine the host page
   - if this is an mmio request, there is no host page; cache the info to
     vcpu->arch.mmio_gva, vcpu->arch.access and vcpu->arch.mmio_gfn
 - walk the shadow page table to find the spte for the translation,
   instantiating missing intermediate page tables as necessary
   - if this is an mmio request, cache the mmio info to the spte and set some
     reserved bits on the spte
 - try to unsynchronize the page
   - if successful, we can let the guest continue and modify the gpte
 - emulate the instruction
   - if failed, unshadow the page and let the guest continue
 - update any translations that were modified by the instruction
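The top-level dispatch can be summarized as follows. This is a condensed
sketch only: the PFERR_* error-code masks match the ones x86 KVM defines,
but the three handler names are chosen here for illustration.

    static int handle_page_fault(struct kvm_vcpu *vcpu, gpa_t addr,
                                 u32 error_code)
    {
            /* reserved bit set: the guest touched cached MMIO */
            if (error_code & PFERR_RSVD_MASK)
                    return handle_cached_mmio(vcpu, addr);

            /* present + write: candidate for the lockless fast path */
            if ((error_code & (PFERR_PRESENT_MASK | PFERR_WRITE_MASK)) ==
                        (PFERR_PRESENT_MASK | PFERR_WRITE_MASK) &&
                try_fast_page_fault(vcpu, addr, error_code))
                    return 0;

            /* slow path: guest walk, host page lookup, spte instantiation,
             * possibly unsync/emulate as described above */
            return handle_fault_slow(vcpu, addr, error_code);
    }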
invlpg handling:

  - walk the shadow page hierarchy and drop affected translations
  - try to reinstantiate the indicated translation in the hope that the
    guest will use it in the near future
Guest control register updates:

- mov to cr3
  - look up new shadow roots
  - synchronize newly reachable shadow pages

- mov to cr0/cr4/efer
  - set up mmu context for new paging mode
  - look up new shadow roots
  - synchronize newly reachable shadow pages
Host translation updates:

  - mmu notifier called with updated hva
  - look up affected sptes through reverse map
  - drop (or update) translations
When the guest runs with cr0.wp=0, the permissions of a gpte.u=1, gpte.w=0
page cannot be mapped to any single spte: the semantics require allowing any
guest kernel access plus user read access. We handle this by mapping the
permissions to two possible sptes, depending on fault type:

- kernel write fault: spte.u=0, spte.w=1 (allows full kernel access,
  disallows user access)
- read fault: spte.u=1, spte.w=0 (allows full read access, disallows kernel
  write access)

(user write faults generate a #PF)

In the first case there are two additional complications (a sketch of the
spte choice follows these lists):

- if CR4.SMEP is enabled: since we've turned the page into a kernel page,
  the kernel may now execute it. We handle this by also setting spte.nx.
  If we get a user fetch or read fault, we'll change spte.u=1 and
  spte.nx=gpte.nx back. For this to work, KVM forces EFER.NX to 1 when
  shadow paging is in use.
- if CR4.SMAP is disabled: since the page has been changed into a kernel
  page, it can not be reused when CR4.SMAP is enabled. We set
  CR4.SMAP && !CR0.WP into the shadow page's role to avoid this case. Note
  that there is no need to handle the case where CR4.SMAP is enabled, since
  KVM will directly inject a #PF into the guest due to the failed permission
  check.
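The two encodings can be expressed compactly. The sketch below uses locally
defined bit masks and a made-up helper name; it captures only the decision
described above, not KVM's actual spte construction:

    #define SPTE_WRITABLE (1ULL << 1)  /* writes allowed */
    #define SPTE_USER     (1ULL << 2)  /* user-mode access allowed */
    #define SPTE_NX       (1ULL << 63) /* execution disallowed */

    /* Pick spte permissions for a gpte.u=1, gpte.w=0 page while the
     * guest has cr0.wp=0 (illustrative helper, not kernel code). */
    static u64 wp0_spte_perms(bool kernel_write_fault, bool smep)
    {
            if (kernel_write_fault)
                    /* spte.u=0, spte.w=1: full kernel access, no user
                     * access; with SMEP also set spte.nx so the kernel
                     * cannot execute the now-kernel page */
                    return SPTE_WRITABLE | (smep ? SPTE_NX : 0);

            /* read fault: spte.u=1, spte.w=0 */
            return SPTE_USER;
    }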
To prevent an spte that was converted into a kernel page under cr0.wp=0 from
being used after cr0.wp has changed to 1, the value of cr0.wp is made part of
the page role. This means that an spte created with one value of cr0.wp
cannot be used when cr0.wp has a different value; it will simply be missed by
the shadow page hash.
To instantiate a large spte, four constraints must be satisfied:

- the spte must point to a large host page
- the guest pte must be a large pte of at least equivalent size (if tdp is
  enabled, there is no guest pte and this condition is satisfied)
- if the spte will be writeable, the large page frame may not overlap any
  write-protected pages
- the guest page must be wholly contained by a single memory slot
To check the last two conditions, the mmu maintains a ->disallow_lpage set of
arrays for each memory slot and large page size. Every write-protected page
causes its disallow_lpage count to be incremented, thus preventing
instantiation of a large spte covering it. The frames at the start and end of
an unaligned memory slot have artificially inflated ->disallow_lpages so they
can never be instantiated.
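A minimal sketch of the resulting check, assuming a per-slot lpage_info
lookup helper along the lines of the one in mmu.c (simplified here):

    /* A large mapping at this gfn/level is allowed only while the
     * slot's counter for the covering range is zero. */
    static bool can_map_large(struct kvm_memory_slot *slot, gfn_t gfn,
                              int level)
    {
            struct kvm_lpage_info *linfo = lpage_info_slot(gfn, slot, level);

            return linfo->disallow_lpage == 0;
    }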
For large memory guests, walking and zapping all shadow pages is slow, and
also blocks the memory accesses of all VCPUs because it needs to hold the MMU
lock. To make this more scalable, kvm maintains a global generation number,
which is stored in kvm->arch.mmu_valid_gen. Every shadow page stores the
current global generation number into sp->mmu_valid_gen when it is created;
pages whose number no longer matches are obsolete. When KVM needs to zap all
shadow pages' sptes, it simply increases the global generation number and
then reloads the root shadow pages on all vcpus. As the VCPUs create new
shadow page tables, the old pages are not used because of the mismatching
generation number.
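The obsolescence test itself is a single comparison, along the lines of
mmu.c's is_obsolete_sp():

    static bool is_obsolete_sp(struct kvm *kvm, struct kvm_mmu_page *sp)
    {
            return unlikely(sp->mmu_valid_gen != kvm->arch.mmu_valid_gen);
    }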
MMIO sptes have a few spare bits, which are used to store a generation
number. The global generation number is stored in
kvm_memslots(kvm)->generation, and is increased whenever guest memory info
changes. When the generation stored in an MMIO spte no longer matches the
global one, the cached MMIO information is ignored and the fault is handled
through the slow path.
Since only 19 bits are used to store the generation number in an mmio spte,
all pages are zapped when the counter overflows.
Unfortunately, a single memory access might touch kvm_memslots(kvm) multiple
times, the last one happening when the generation number is retrieved and
stored into the MMIO spte. Thus, the MMIO spte may be created based on
out-of-date information, but with an up-to-date generation number.
To avoid this, the generation number is incremented again after
synchronize_srcu returns; thus, the low bit of kvm_memslots(kvm)->generation
is only 1 during a memslot update, while some SRCU readers might be using the
old copy. An MMIO spte created during this window will fail the generation
check on its next use and fall back to the slow path.
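A sketch of the validity check this implies, with the bit layout abstracted
away and the extraction helper assumed rather than taken from mmu.c:

    /* An MMIO spte is usable only if the generation it carries matches
     * the current memslots generation, masked to the 19 stored bits. */
    static bool mmio_spte_current(struct kvm *kvm, u64 spte)
    {
            u64 gen  = get_mmio_spte_generation(spte); /* assumed helper */
            u64 mask = (1ULL << 19) - 1;

            return gen == (kvm_memslots(kvm)->generation & mask);
    }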
Further reading:

- NPT presentation from KVM Forum 2008
  http://www.linux-kvm.org/images/c/c8/KvmForum2008%24kdf2008_21.pdf