
Searched full:guest (Results 1 – 25 of 170) sorted by relevance


/Documentation/virt/kvm/x86/
mmu.rst
     8: for presenting a standard x86 mmu to the guest, while translating guest
    14: the guest should not be able to determine that it is running
    19: the guest must not be able to touch host memory not assigned
    28: Linux memory management code must be in control of guest memory
    32: report writes to guest memory to enable live migration
    47: gfn   guest frame number
    48: gpa   guest physical address
    49: gva   guest virtual address
    50: ngpa  nested guest physical address
    51: ngva  nested guest virtual address
    [all …]
running-nested-guests.rst
     7: A nested guest is the ability to run a guest inside another guest (it
     9: example is a KVM guest that in turn runs on a KVM guest (the rest of
    15: | (Nested Guest) | | (Nested Guest) |
    19: | L1 (Guest Hypervisor) |
    33: - L1 – level-1 guest; a VM running on L0; also called the "guest
    36: - L2 – level-2 guest; a VM running on L1, this is the "nested guest"
    46: (guest hypervisor), L3 (nested guest).
    61: Provider, using nested KVM lets you rent a large enough "guest
    62: hypervisor" (level-1 guest). This in turn allows you to create
    66: - Live migration of "guest hypervisors" and their nested guests, for
    [all …]
amd-memory-encryption.rst
    98: __u16 ghcb_version;  /* maximum guest GHCB version allowed */
   108: requests. If ``ghcb_version`` is 0 for any other guest type, then the maximum
   109: allowed guest GHCB protocol will default to version 2.
   133: context. To create the encryption context, user must provide a guest policy,
   144: __u32 policy;  /* guest's policy */
   146: … __u64 dh_uaddr;  /* userspace address pointing to the guest owner's PDH key */
   149: … __u64 session_addr;  /* userspace address which points to the guest session information */
   164: of the memory contents that can be sent to the guest owner as an attestation
   184: data encrypted by the KVM_SEV_LAUNCH_UPDATE_DATA command. The guest owner may
   185: wait to provide the guest with confidential information until it can verify the
   [all …]
msr.rst
    25: in guest RAM. This memory is expected to hold a copy of the following
    40: guest has to check version before and after grabbing
    64: guest RAM, plus an enable bit in bit 0. This memory is expected to hold
    87: guest has to check version before and after grabbing
   127: coordinated between the guest and the hypervisor. Availability
   139: | | | guest vcpu has been paused by |
   196: which must be in guest RAM. This memory is expected to hold the
   220: a token that will be used to notify the guest when missing page becomes
   224: is currently supported, when set, it indicates that the guest is dealing
   226: 'flags' is '0' it means that this is regular page fault. Guest is
   [all …]
hypercalls.rst
    54: :Purpose: Trigger guest exit so that the host can check for pending
    70: :Purpose: Expose hypercall availability to the guest. On x86 platforms, cpuid
    81: :Purpose: To enable communication between the hypervisor and guest there is a
    83: The guest can map this shared page to access its supervisor register
    93: A vcpu of a paravirtualized guest that is busywaiting in guest
    98: same guest can wakeup the sleeping vcpu by issuing KVM_HC_KICK_CPU hypercall,
   107: :Purpose: Hypercall used to synchronize host and guest clocks.
   111: a0: guest physical address where host copies
   130: * tsc: guest TSC value used to calculate sec/nsec pair
   133: The hypercall lets a guest compute a precise timestamp across
   [all …]
cpuid.rst
     9: A guest running on a kvm host can check some of its features using
    12: a guest.
    65: KVM_FEATURE_PV_UNHALT          7   guest checks this feature bit
    69: KVM_FEATURE_PV_TLB_FLUSH       9   guest checks this feature bit
    77: KVM_FEATURE_PV_SEND_IPI        11  guest checks this feature bit
    85: KVM_FEATURE_PV_SCHED_YIELD     13  guest checks this feature bit
    89: KVM_FEATURE_ASYNC_PF_INT       14  guest checks this feature bit
    95: KVM_FEATURE_MSI_EXT_DEST_ID    15  guest checks this feature bit
    99: KVM_FEATURE_HC_MAP_GPA_RANGE   16  guest checks this feature bit before
   103: KVM_FEATURE_MIGRATION_CONTROL  17  guest checks this feature bit before
   [all …]
errata.rst
    27: Clearing these bits in CPUID has no effect on the operation of the guest;
    31: **Workaround:** It is recommended to always set these bits in guest CPUID.
    54: KVM does not virtualize guest MTRR memory types. KVM emulates accesses to MTRR
    55: MSRs, i.e. {RD,WR}MSR in the guest will behave as expected, but KVM does not
    56: honor guest MTRRs when determining the effective memory type, and instead
    57: treats all of guest memory as having Writeback (WB) MTRRs.
    63: expected, but setting CR0.CD=1 has no impact on the cacheability of guest
    68: running in the guest.
/Documentation/virt/hyperv/
coco.rst
    25: * AMD processor with SEV-SNP. Hyper-V does not run guest VMs with AMD SME,
    40: * Fully-enlightened mode. In this mode, the guest operating system is
    43: * Paravisor mode. In this mode, a paravisor layer between the guest and the
    44: host provides some operations needed to run as a CoCo VM. The guest operating
    49: points on a spectrum spanning the degree of guest enlightenment needed to run
    53: guest OS with no knowledge of memory encryption or other aspects of CoCo VMs
    56: aspects of CoCo VMs are handled by the Hyper-V paravisor while the guest OS
    59: paravisor, and there is no standardized mechanism for a guest OS to query the
    61: the paravisor provides is hard-coded in the guest OS.
    64: a limited paravisor to provide services to the guest such as a virtual TPM.
    [all …]
vmbus.rst
     5: VMBus is a software construct provided by Hyper-V to guest VMs. It
     7: devices that Hyper-V presents to guest VMs. The control path is
     8: used to offer synthetic devices to the guest VM and, in some cases,
    10: channels for communicating between the device driver in the guest VM
    12: signaling primitives to allow Hyper-V and the guest to interrupt
    16: entry in a running Linux guest. The VMBus driver (drivers/hv/vmbus_drv.c)
    37: Guest VMs may have multiple instances of the synthetic SCSI
    47: the device in the guest VM. For example, the Linux driver for the
    65: guest, and the "out" ring buffer is for messages from the guest to
    67: viewed by the guest side. The ring buffers are memory that is
    [all …]
vpci.rst
     5: In a Hyper-V guest VM, PCI pass-thru devices (also called
     8: Guest device drivers can interact directly with the hardware
    12: hypervisor. The device should appear to the guest just as it
    24: and produces the same benefits by allowing a guest device
    55: do not appear in the Linux guest's ACPI tables. vPCI devices
    68: in the guest, or if the vPCI device is removed from
    95: hv_pci_probe() allocates a guest MMIO range to be used as PCI
    99: hv_pci_enter_d0(). When the guest subsequently accesses this
   118: guest VM at any time during the life of the VM. The removal
   120: is not under the control of the guest OS.
   [all …]
overview.rst
     6: enlightened guest on Microsoft's Hyper-V hypervisor. Hyper-V
     9: equivalent to KVM and QEMU, for example). Guest VMs run in child
    19: Linux Guest Communication with Hyper-V
    24: some guest actions trap to Hyper-V. Hyper-V emulates the action and
    25: returns control to the guest. This behavior is generally invisible
    31: processor registers or in memory shared between the Linux guest and
    38: the guest, and the Linux kernel can read or write these MSRs using
    45: the Hyper-V host and the Linux guest. It uses memory that is shared
    46: between Hyper-V and the guest, along with various signaling
    70: * Linux tells Hyper-V the guest physical address (GPA) of the
    [all …]
clocks.rst
     9: and timer. Guest VMs use this virtualized hardware as the Linux
    12: architectural system counter is functional in guest VMs on Hyper-V.
    15: Linux kernel in a Hyper-V guest on arm64. However, older versions
    24: On x86/x64, Hyper-V provides guest VMs with a synthetic system clock
    30: to the guest VM via a synthetic MSR. Hyper-V initialization code
    35: guest VMs.
    39: the guest can configure a memory page to be shared between the guest
    42: value, the guest reads the TSC and then applies the scale and offset
    52: When a Linux guest detects that this Hyper-V functionality is
/Documentation/security/
snp-tdx-threat-model.rst
    46: integrity for the VM's guest memory and execution state (vCPU registers),
    47: more tightly controlled guest interrupt injection, as well as some
    48: additional mechanisms to control guest-host page mapping. More details on
    53: The basic CoCo guest layout includes the host, guest, the interfaces that
    54: communicate guest and host, a platform capable of supporting CoCo VMs, and
    55: a trusted intermediary between the guest VM and the underlying platform
    58: is still in charge of the guest lifecycle, i.e. create or destroy a CoCo
    65: the rest of the components (data flow for guest, host, hardware) ::
    68: | CoCo guest VM |<---->| |
   136: (in contrast to a remote network attacker) and has control over the guest
    [all …]
/Documentation/virt/kvm/s390/
s390-pv.rst
    10: access VM state like guest memory or guest registers. Instead, the
    15: Each guest starts in non-protected mode and then may make a request to
    16: transition into protected mode. On transition, KVM registers the guest
    20: The Ultravisor will secure and decrypt the guest's boot memory
    22: starts/stops and injected interrupts while the guest is running.
    24: As access to the guest's state, such as the SIE state description, is
    29: reduce exposed guest state.
    40: field (offset 0x54). If the guest cpu is not enabled for the interrupt
    50: access to the guest memory.
    72: Secure Interception General Register Save Area. Guest GRs and most of
    [all …]
/Documentation/ABI/testing/
sysfs-hypervisor-xen
     6: Type of guest:
     7: "Xen": standard guest type on arm
     8: "HVM": fully virtualized guest (x86)
     9: "PV": paravirtualized guest (x86)
    10: "PVH": fully virtualized guest without legacy emulation (x86)
    22: "self" The guest can profile itself
    23: "hv" The guest can profile itself and, if it is
    25: "all" The guest can profile itself, the hypervisor
sysfs-driver-pciback
     7: the format of DDDD:BB:DD.F-REG:SIZE:MASK will allow the guest
    14: will allow the guest to read and write to the configuration
    23: MSI, MSI-X) set by a connected guest. It is meant to be set
    24: only when the guest is a stubdomain hosting device model (qemu)
    27: to a PV guest. The device is automatically removed from this
/Documentation/arch/x86/
tdx.rst
     7: Intel's Trust Domain Extensions (TDX) protect confidential guest VMs from
     8: the host and physical attacks by isolating the guest register state and by
     9: encrypting the guest memory. In TDX, a special module running in a special
    10: mode sits between the host and the guest and manages the guest/host
   196: TDX Guest Support
   198: Since the host cannot directly access guest registers or memory, much
   199: normal functionality of a hypervisor must be moved into the guest. This is
   201: guest kernel. A #VE is handled entirely inside the guest kernel, but some
   205: guest to the hypervisor or the TDX module.
   249: indicates a bug in the guest. The guest may try to handle the #GP with a
   [all …]
amd-memory-encryption.rst
    17: of the guest VM are secured so that a decrypted version is available only
    18: within the VM itself. SEV guest VMs have the concept of private and shared
    19: memory. Private memory is encrypted with the guest-specific key, while shared
    36: When SEV is enabled, instruction pages and guest page tables are always treated
    37: as private. All the DMA operations inside the guest must be performed on shared
    38: memory. Since the memory encryption bit is controlled by the guest OS when it
    53: system physical addresses, not guest physical
   104: guest side implementation to function correctly. The below table lists the
   105: expected guest behavior with various possible scenarios of guest/hypervisor
   109: | Feature Enabled | Guest needs | Guest has | Guest boot |
   [all …]
/Documentation/virt/coco/
sev-guest.rst
     4: The Definitive SEV Guest API Documentation
    10: The SEV API is a set of ioctls that are used by the guest or hypervisor
    17: - Guest ioctls: These query and set attributes of the SEV virtual machine.
    22: This section describes ioctls that are used for querying the SEV guest report
    30: hypervisor or guest. The ioctl can be used inside the guest or the
    40: The guest ioctl should be issued on a file descriptor of the /dev/sev-guest
    47: the guest's message sequence counter. If the guest driver fails to increment message
    91: :Type: guest ioctl
   106: :Type: guest ioctl
   111: The derived key can be used by the guest for any purpose, such as sealing keys
   [all …]
/Documentation/virt/kvm/
vcpu-requests.rst
    48: The goal of a VCPU kick is to bring a VCPU thread out of guest mode in
    50: a guest mode exit. However, a VCPU thread may not be in guest mode at the
    55: 1) Send an IPI. This forces a guest mode exit.
    56: 2) Waking a sleeping VCPU. Sleeping VCPUs are VCPU threads outside guest
    60: 3) Nothing. When the VCPU is not in guest mode and the VCPU thread is not
    67: guest is running in guest mode or not, as well as some specific
    68: outside guest mode states. The architecture may use ``vcpu->mode`` to
    76: The VCPU thread is outside guest mode.
    80: The VCPU thread is in guest mode.
    89: The VCPU thread is outside guest mode, but it wants the sender of
    [all …]
/Documentation/arch/s390/
vfio-ap.rst
   122: Let's now take a look at how AP instructions executed on a guest are interpreted
   128: control domains assigned to the KVM guest:
   131: to the KVM guest. Each bit in the mask, from left to right, corresponds to
   133: use by the KVM guest.
   136: assigned to the KVM guest. Each bit in the mask, from left to right,
   138: corresponding queue is valid for use by the KVM guest.
   141: assigned to the KVM guest. The ADM bit mask controls which domains can be
   143: guest. Each bit in the mask, from left to right, corresponds to a domain from
   153: adapters 1 and 2 and usage domains 5 and 6 are assigned to a guest, the APQNs
   154: (1,5), (1,6), (2,5) and (2,6) will be valid for the guest.
   [all …]
/Documentation/virt/geniezone/
introduction.rst
    11: secure boot. It can create guest VMs for security use cases and has
    40: The gzvm hypervisor emulates a virtual mobile platform for guest OS running on
    41: guest VM. The platform supports various architecture-defined devices, such as
    46: Communication among guest VMs is provided mainly on RPC. More communication
    54: events in guest VMs need to be processed.
    58: All interrupts while guest VMs are running are handled by GenieZone
    60: In case there's no guest VM running, physical interrupts are handled by host
    87: vIRQ injection in guest VMs via GIC.
/Documentation/filesystems/
virtiofs.rst
     6: virtiofs: virtio-fs host<->guest shared file system
    14: VIRTIO "virtio-fs" device for guest<->host file system sharing. It allows a
    15: guest to mount a directory that has been exported on the host.
    24: expose the storage network to the guest. The virtio-fs device was designed to
    28: guest and host to increase performance and provide semantics that are not
    37: guest# mount -t virtiofs myfs /mnt
    60: client. The guest acts as the FUSE client while the host acts as the FUSE
    65: response portion of the buffer is filled in by the host and the guest handles
/Documentation/virt/kvm/arm/
fw-pseudo-registers.rst
    11: This means that a guest booted on two different versions of KVM can observe
    12: two different "firmware" revisions. This could cause issues if a given guest
    15: guest.
    28: and power-off to the guest.
    40: offered by KVM to the guest via a HVC call. The workaround is described
    48: guest is unknown.
    51: available to the guest and required for the mitigation.
    54: is available to the guest, but it is not needed on this VCPU.
    58: offered by KVM to the guest via a HVC call. The workaround is described
    83: bitmap is translated to the services that are available to the guest.
    [all …]
/Documentation/arch/arm64/
perf.rst
    28: The kernel runs at EL2 with VHE and EL1 without. Guest kernels always run
    34: For the guest this attribute will exclude EL1. Please note that EL2 is
    35: never counted within a guest.
    48: guest/host transitions.
    50: For the guest this attribute has no effect. Please note that EL2 is
    51: never counted within a guest.
    57: These attributes exclude the KVM host and guest, respectively.
    62: The KVM guest may run at EL0 (userspace) and EL1 (kernel).
    66: must enable/disable counting on the entry and exit to the guest. This is
    70: exiting the guest we disable/enable the event as appropriate based on the
    [all …]
