| /Documentation/virt/ |
| D | ne_overview.rst |
    14  For example, an application that processes sensitive data and runs in a VM,
    15  can be separated from other applications running in the same VM. This
    16  application then runs in a separate VM from the primary VM, namely an enclave.
    17  It runs alongside the VM that spawned it. This setup matches low latency
    24  carved out of the primary VM. Each enclave is mapped to a process running in the
    25  primary VM, which communicates with the NE kernel driver via an ioctl interface.
    30  VM guest that uses the provided ioctl interface of the NE driver to spawn an
    31  enclave VM (that's 2 below).
    33  There is a NE emulated PCI device exposed to the primary VM. The driver for this
    39  hypervisor running on the host where the primary VM is running. The Nitro
    [all …]
|
| /Documentation/devicetree/bindings/hwmon/ |
| D | moortec,mr75203.yaml |
    20  *) Voltage Monitor (VM) - used to monitor voltage levels (e.g. mr74138).
    26  be presented for VM for measurement within its range (e.g. mr76006 -
    29  TS, VM & PD also include a digital interface, which consists of configuration
    54  - const: vm
    56  intel,vm-map:
    58  PVT controller has 5 VM (voltage monitor) sensors.
    59  vm-map defines CPU core to VM instance mapping. A
    60  value of 0xff means that VM sensor is unused.
    73  moortec,vm-active-channels:
    75  Defines the number of channels per VM that are actually used and are
    [all …]
|
| /Documentation/virt/geniezone/ |
| D | introduction.rst |
    15  kernel for vCPU scheduling, memory management, inter-VM communication and virtio
    27  VM manager aims to provide vCPUs on the basis of time sharing on physical
    28  CPUs. It requires Linux kernel in host VM for vCPU scheduling and VM power
    41  guest VM. The platform supports various architecture-defined devices, such as
    44  - Inter-VM Communication
    53  Ioeventfd is implemented using eventfd for signaling host VM that some IO
    60  In case there's no guest VM running, physical interrupts are handled by host
    61  VM directly for performance reasons. Irqfd is also implemented using eventfd
    67  - vm
    69  The vm component is responsible for setting up the capability and memory
    [all …]
|
| /Documentation/virt/hyperv/ |
| D | coco.rst |
    7   the confidentiality and integrity of data in the VM's memory, even in the
    9   CoCo VMs on Hyper-V share the generic CoCo VM threat model and security
    14  A Linux CoCo VM on Hyper-V requires the cooperation and interaction of the
    21  * The VM runs a version of Linux that supports being a CoCo VM
    27  VM on Hyper-V.
    31  To create a CoCo VM, the "Isolated VM" attribute must be specified to Hyper-V
    32  when the VM is created. A VM cannot be changed from a CoCo VM to a normal VM,
    37  Hyper-V CoCo VMs can run in two modes. The mode is selected when the VM is
    38  created and cannot be changed during the life of the VM.
    41  enlightened to understand and manage all aspects of running as a CoCo VM.
    [all …]
|
| D | vpci.rst |
    5   In a Hyper-V guest VM, PCI pass-thru devices (also called
    7   that are mapped directly into the VM's physical address space.
    56  may be added to a VM or removed from a VM at any time during
    57  the life of the VM, and not just during initial boot.
    69  the VM while the VM is running. The ongoing operation of the
    91  across reboots of the same VM so that the PCI domainIDs don't
    118 guest VM at any time during the life of the VM. The removal
    122 A guest VM is notified of the removal by an unsolicited
    142 After sending the Eject message, Hyper-V allows the guest VM
    153 during the guest VM lifecycle, proper synchronization in the
    [all …]
|
| /Documentation/virt/acrn/ |
| D | introduction.rst |
    7   hardware. It has a privileged management VM, called Service VM, to manage User
    10  ACRN userspace is an application running in the Service VM that emulates
    11  devices for a User VM based on command line configurations. ACRN Hypervisor
    12  Service Module (HSM) is a kernel module in the Service VM which provides
    19  Service VM                    User VM
    35  ACRN userspace allocates memory for the User VM, configures and initializes the
    36  devices used by the User VM, loads the virtual bootloader, initializes the
    37  virtual CPU state and handles I/O request accesses from the User VM. It uses
|
| D | io-request.rst |
    6   An I/O request of a User VM, which is constructed by the hypervisor, is
    14  For each User VM, there is a shared 4-KByte memory region used for I/O requests
    15  communication between the hypervisor and Service VM. An I/O request is a
    18  VM. ACRN userspace in the Service VM first allocates a 4-KByte page and passes
    26  An I/O client is responsible for handling User VM I/O requests whose accessed
    28  User VM. There is a special client associated with each User VM, called the
    31  VM.
    39  | Service VM |
    88  state when a trapped I/O access happens in a User VM.
    90  the Service VM.
|
| /Documentation/virt/gunyah/ |
| D | index.rst |
    31  2. "Proxy" scheduling in which an owner-VM can donate the remainder of its
    32  own vCPU's time slice to an owned-VM's vCPU via a hypercall.
    43  assigned VM. Gunyah makes use of hardware interrupt virtualization where
    46  - Inter-VM Communication:
    63  Para-virtualization of devices is supported using inter-VM communication and
    79  For example, inter-VM communication using Gunyah doorbells and message queues
    94  The Gunyah Resource Manager (RM) is a privileged application VM supporting the
    123 - VM lifecycle management: allocating a VM, starting VMs, destruction of VMs
    124 - VM access control policy, including memory sharing and lending
    126 - Forwarding of system-level events (e.g. VM shutdown) to owner VM
    [all …]
|
| /Documentation/virt/kvm/s390/ |
| D | s390-pv-dump.rst |
    10  Dumping a VM is an essential tool for debugging problems inside
    11  it. This is especially true when a protected VM runs into trouble as
    15  However when dumping a protected VM we need to maintain its
    16  confidentiality until the dump is in the hands of the VM owner who
    19  The confidentiality of the VM dump is ensured by the Ultravisor who
    22  Communication Key which is the key that's used to encrypt VM data in a
    34  and extracts dump keys with which the VM dump data will be encrypted.
    38  Currently there are two types of data that can be gathered from a VM:
|
| /Documentation/networking/ |
| D | net_failover.rst |
    24  datapath. It also enables hypervisor controlled live migration of a VM with
    72  Booting a VM with the above configuration will result in the following 3
    73  interfaces created in the VM:
    92  device; and on the first boot, the VM might end up with both 'failover' device
    94  This will result in lack of connectivity to the VM. So some tweaks might be
    113 Live Migration of a VM with SR-IOV VF & virtio-net in STANDBY mode
    121 the source hypervisor. Note: It is assumed that the VM is connected to a
    123 device to the VM. This is not the VF that was passthrough'd to the VM (seen in
    139 DOMAIN=vm-01
    143 TAP_IF=vmtap01 # virtio-net interface in the VM.
    [all …]
|
| /Documentation/translations/zh_CN/mm/ |
| D | overcommit-accounting.rst |
    33  The overcommit policy is set via the sysctl `vm.overcommit_memory`.
    35  The overcommit amount can be set via `vm.overcommit_ratio` (percentage)
    36  or `vm.overcommit_kbytes` (absolute value). These only take effect when `vm.overcommit_memory` is set to 2.
|
| /Documentation/gpu/rfc/ |
| D | i915_vm_bind.rst |
    9   specified address space (VM). These mappings (also referred to as persistent
    18  User has to opt-in for VM_BIND mode of binding for an address space (VM)
    19  during VM creation time via I915_VM_CREATE_FLAGS_USE_VM_BIND extension.
    38  submissions on that VM and will not be in the working set for currently running
    43  A VM in VM_BIND mode will not support older execbuf mode of binding.
    56  works with execbuf3 ioctl for submission. All BOs mapped on that VM (through
    82  dma-resv fence list of all shared BOs mapped on the VM.
    85  is private to a specified VM via I915_GEM_CREATE_EXT_VM_PRIVATE flag during
    86  BO creation. Unlike Shared BOs, these VM private BOs can only be mapped on
    87  the VM they are private to and can't be dma-buf exported.
    [all …]
|
| /Documentation/admin-guide/hw-vuln/ |
| D | vmscape.rst |
    39  IBPB before the first exit to userspace after VM-exit. If userspace did not run
    40  between VM-exit and the next VM-entry, no IBPB is issued.
    45  context switch time, while the userspace VMM can run after a VM-exit without a
    87  exit to userspace after VM-exit.
    91  IBPB is issued on every VM-exit. This occurs when other mitigations like
    92  RETBLEED or SRSO are already issuing IBPB on VM-exit.
|
| /Documentation/devicetree/bindings/reserved-memory/ |
| D | xen,shared-memory.txt |
    4   virtual machine. Typically, a region is configured at VM creation time
    20  memory region used for the mapping in the borrower VM.
    24  the VM config file
|
| /Documentation/arch/s390/ |
| D | monreader.rst |
    2   Linux API for read access to z/VM Monitor Records
    15  usable from user space and allows read access to the z/VM Monitor Records
    16  collected by the `*MONITOR` System Service of z/VM.
    21  The z/VM guest on which you want to access this API needs to be configured in
    25  This item will use the IUCV device driver to access the z/VM services, so you
    26  need a kernel with IUCV support. You also need z/VM version 4.4 or 5.1.
    78  Refer to the "z/VM Performance" book (SC24-6109-00) on how to create a monitor
    79  DCSS if your z/VM doesn't have one already, you need Class E privileges to
    147 See "Appendix A: `*MONITOR`" in the "z/VM Performance" document for a description
    149 be found here (z/VM 5.1): https://www.vm.ibm.com/pubs/mon510/index.html
    [all …]
|
| D | 3270.rst |
    22  VM-ESA operating system, define a 3270 to your virtual machine by using
    31  Your network connection from VM-ESA allows you to use x3270, tn3270, or
    56  configuration file under /etc/modprobe.d/. If you are working on a VM
    60  other, or neither. If you generate both, the console type under VM is
    70  3. (If VM) define devices with DEF GRAF
    74  To test that everything works, assuming VM and x3270,
    98  with emulated 3270s, as soon as you dial into your vm guest using the
    104 3. Define graphic devices to your vm guest machine, if you
    149 x3270 vm-esa-domain-name &
    155 The screen you should now see contains a VM logo with input
    [all …]
|
| /Documentation/virt/kvm/arm/ |
| D | vcpu-features.rst |
    27  system. The ID register values may be VM-scoped in KVM, meaning that the
    28  values could be shared for all vCPUs in a VM.
    32  registers are mutable until the VM has started, i.e. userspace has called
    33  ``KVM_RUN`` on at least one vCPU in the VM. Userspace can discover what fields
|
| D | pkvm.rst |
    19  introduced to manage manipulation of guest stage-2 page tables, creation of VM
    31  responsible for managing most of the VM metadata in either case.
    56  - Memslot configuration is fixed once a VM has started running, with subsequent
    67  VM creation
    86  VM runtime
|
| /Documentation/ABI/testing/ |
| D | sysfs-kernel-mm |
    3   Contact: Nishanth Aravamudan <nacc@us.ibm.com>, VM maintainers
    5   /sys/kernel/mm/ should contain any and all VM
|
| /Documentation/userspace-api/ |
| D | mfd_noexec.rst |
    33  - Add a new pid namespace sysctl: vm.memfd_noexec to help applications in
    56  ``pid namespaced sysctl vm.memfd_noexec``
    58  The new pid namespaced sysctl vm.memfd_noexec has 3 values:
    73  vm.memfd_noexec=1 means the old software will create non-executable memfd
    77  The value of vm.memfd_noexec is passed to child namespace at creation
|
| /Documentation/virt/kvm/devices/ |
| D | vfio.rst |
    11  Only one VFIO instance may be created per VM. The created device
    12  tracks VFIO files (group or device) in use by the VM and features
    14  of the VM. As groups/devices are enabled and disabled for use by the
    15  VM, KVM should be updated about their presence. When registered with
    66  KVM VM associated with this device, returns a file descriptor "pviommufd"
|
| /Documentation/mm/ |
| D | overcommit-accounting.rst |
    31  The overcommit policy is set via the sysctl ``vm.overcommit_memory``.
    33  The overcommit amount can be set via ``vm.overcommit_ratio`` (percentage)
    34  or ``vm.overcommit_kbytes`` (absolute value). These only have an effect
    35  when ``vm.overcommit_memory`` is set to 2.
|
| /Documentation/virt/kvm/ |
| D | api.rst |
    17  - VM ioctls: These query and set attributes that affect an entire virtual
    18  machine, for example memory layout. In addition a VM ioctl is used to
    21  VM ioctls must be issued from the same process (address space) that was
    22  used to create the VM.
    36  was used to create the VM.
    44  handle will create a VM file descriptor which can be used to issue VM
    45  ioctls. A KVM_CREATE_VCPU or KVM_CREATE_DEVICE ioctl on a VM fd will
    58  It is important to note that although VM ioctls may only be issued from
    59  the process that created the VM, a VM's lifecycle is associated with its
    60  file descriptor, not its creator (process). In other words, the VM and
    [all …]
|
| /Documentation/devicetree/bindings/power/supply/ |
| D | qcom,pm8916-bms-vm.yaml |
    4   $id: http://devicetree.org/schemas/power/supply/qcom,pm8916-bms-vm.yaml#
    21  const: qcom,pm8916-bms-vm
    65  compatible = "qcom,pm8916-bms-vm";
|
| /Documentation/security/ |
| D | snp-tdx-threat-model.rst |
    33  Machines (VM) inside TEE. From now on in this document will be referring
    39  inside a CoCo VM. Namely, confidential computing allows its users to
    46  integrity for the VM's guest memory and execution state (vCPU registers),
    51  `AMD Memory Encryption <https://www.amd.com/system/files/techdocs/sev-snp-strengthening-vm-isolatio…
    55  a trusted intermediary between the guest VM and the underlying platform
    59  VM, manage its access to system resources, etc. However, since it
    60  typically stays out of CoCo VM TCB, its access is limited to preserve the
    68  | CoCo guest VM |<---->| |
    131 CoCo VM TCB due to its large SW attack surface. It is important to note
    134 VM TCB. This new type of adversary may be viewed as a more powerful type
    [all …]
|