
.. SPDX-License-Identifier: GPL-2.0

===============
DMA and swiotlb
===============

swiotlb is a memory buffer allocator for the Linux kernel DMA layer. It is
used when a device doing DMA can't directly access the target memory buffer.
The DMA layer allocates a temporary buffer from swiotlb, the DMA is done
to/from that temporary buffer, and the CPU copies the data between the
temporary buffer and the original target memory buffer. This approach is
called "bounce buffering", and the temporary buffer is called a "bounce
buffer".

Device drivers don't interact with swiotlb directly. They use the normal DMA
map, unmap, and sync APIs, and these APIs use the device DMA attributes and
kernel-wide settings to determine whether bounce buffering is necessary for
each DMA operation. When it is, the DMA layer calls swiotlb to do the bounce
buffering.

Because the CPU copies data between the bounce buffer and the original target
memory buffer, bounce buffering is slower than DMA done directly to the
original buffer and consumes more CPU resources, so it is used only when
necessary.

Usage Scenarios
---------------
swiotlb was originally created to handle DMA for devices with addressing
limitations. As physical memory sizes grew beyond 4 GiB, some devices could
only provide 32-bit DMA addresses. By allocating bounce buffer memory below
the 4 GiB line, such devices can still do I/O to target buffers anywhere in
memory.

More recently, Confidential Computing (CoCo) VMs encrypt guest memory so that
the host cannot access it. For the host to do I/O on behalf of the guest, the
I/O must be directed to guest memory that is unencrypted. CoCo VMs set a
kernel-wide option to force all DMA I/O to use bounce buffers, and the bounce
buffer memory is set up as unencrypted. The host does DMA I/O to/from the
bounce buffer memory, and "sync" operations cause the CPU to copy the data
to/from the original target memory buffer. The CPU copying bridges between the
unencrypted and the encrypted memory, so device drivers "just work" in a CoCo
VM without needing to know about memory encryption.

Another scenario is an untrusted device behind an IOMMU. The device should be
given access only to the memory containing the data being transferred. But if
that memory occupies only part of an IOMMU granule, other parts of the granule
may contain unrelated kernel data. Since IOMMU access control is per-granule,
the untrusted device can gain access to the unrelated kernel data. This problem
is solved by bounce buffering the DMA operation and ensuring that unused
portions of the bounce buffer do not contain any unrelated kernel data.
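
The exposure that bounce buffering avoids can be quantified with simple
arithmetic. The following user-space sketch is illustrative only; the 4 KiB
granule size and the buffer address and length are assumed example values::

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        const uint64_t granule = 4096;         /* assumed IOMMU granule size */
        uint64_t buf_addr      = 0x1000a00;    /* hypothetical DMA buffer */
        uint64_t buf_size      = 512;

        /* IOMMU mappings must cover whole granules. */
        uint64_t map_start = buf_addr & ~(granule - 1);
        uint64_t map_end   = (buf_addr + buf_size + granule - 1) & ~(granule - 1);

        /* Bytes of unrelated memory the device could also access. */
        uint64_t exposed = (map_end - map_start) - buf_size;

        printf("mapped %llu bytes, %llu of them outside the DMA buffer\n",
               (unsigned long long)(map_end - map_start),
               (unsigned long long)exposed);
        return 0;
    }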

Core Functionality
------------------
The primary swiotlb APIs are swiotlb_tbl_map_single() and
swiotlb_tbl_unmap_single(). The "map" API allocates a bounce buffer of a
specified size in bytes and returns the physical address of the buffer. The
buffer memory is physically contiguous, and the DMA layer maps that physical
address to a DMA address for the driver to program into the device.

swiotlb_tbl_unmap_single() does the reverse. Unless DMA_ATTR_SKIP_CPU_SYNC is
set, the unmap does a "sync" operation to cause a CPU copy of the data from the
bounce buffer back to the original buffer, and then releases the bounce buffer
for reuse.

swiotlb also provides "sync" APIs corresponding to the dma_sync_*() APIs that a
driver uses when control of a buffer transitions between the CPU and the
device. The swiotlb "sync" APIs cause a CPU copy of the data between the
original buffer and the bounce buffer, and support partial syncs of a subset of
the buffer.
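
The overall map/sync/unmap flow can be modeled outside the kernel. The sketch
below is a user-space illustration of the bounce-buffering concept only; the
function names are hypothetical and it does not use the real swiotlb or DMA
APIs::

    #include <stdlib.h>
    #include <string.h>

    /* Hypothetical stand-ins for the map and unmap steps. */
    static void *bounce_map(const void *orig, size_t size)
    {
        void *bounce = malloc(size);
        if (bounce)
            memcpy(bounce, orig, size);   /* "sync" toward the device */
        return bounce;
    }

    static void bounce_unmap(void *bounce, void *orig, size_t size)
    {
        memcpy(orig, bounce, size);       /* "sync" back toward the CPU */
        free(bounce);                     /* release for reuse */
    }

    int main(void)
    {
        char data[64] = "original target memory buffer";
        void *bb = bounce_map(data, sizeof(data));

        /* A device would DMA to/from 'bb' here instead of 'data'. */

        bounce_unmap(bb, data, sizeof(data));
        return 0;
    }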

Core Functionality Constraints
------------------------------
The swiotlb map, unmap, and sync APIs must not block, because they are called
from contexts that can't block. Hence the default pool of bounce buffer memory
is pre-allocated at boot time (but see Dynamic swiotlb below). Because swiotlb
requests are satisfied from this pool, sizing the pool is a tradeoff. If the
pool is too small, bounce buffer requests cannot always be satisfied, as the
non-blocking requirement means requests can't wait for space to be freed. If
the pool is too large, this pre-allocated memory is not available for other
uses in the system. The tradeoff is particularly acute in CoCo VMs, which
bounce buffer all DMA I/O. These VMs use a heuristic to set the default pool
size to ~6% of memory, which may be much more than is needed. Conversely, the
heuristic might produce a size that is insufficient, depending on the I/O
patterns of the workload. Choosing a good default memory pool size remains an
open issue.
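
As a rough illustration of that heuristic, the sketch below computes ~6% of
memory and clamps it to a [64 MiB, 1 GiB] range; the clamp bounds are an
assumption based on the x86 CoCo setup code, not something this document
specifies::

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        const uint64_t MiB = 1024 * 1024;
        uint64_t total_mem = 16384 * MiB;        /* example: a 16 GiB VM */

        /* ~6% of memory, clamped to an assumed [64 MiB, 1 GiB] range. */
        uint64_t size = total_mem * 6 / 100;
        if (size < 64 * MiB)
            size = 64 * MiB;
        if (size > 1024 * MiB)
            size = 1024 * MiB;

        printf("default swiotlb pool: %llu MiB\n",
               (unsigned long long)(size / MiB));
        return 0;
    }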

Because a single bounce buffer is limited to a maximum of 256 KiB, when a
device's DMA settings are such that the device might use swiotlb, the maximum
size of a DMA segment must be limited to that 256 KiB. This value is
communicated to higher-level kernel code via swiotlb_max_mapping_size() and
dma_max_mapping_size(). If the higher-level code fails to account for this
limit, it may make requests that are too large for swiotlb, and those requests
will fail.

A device may also declare DMA alignment requirements via min_align_mask. When
min_align_mask is non-zero, it may produce an "alignment offset" in the address
of the bounce buffer that slightly reduces the maximum size of an allocation.
For example, for a block device whose I/O might go through swiotlb,
max_sectors_kb will be 256 KiB. When min_align_mask is non-zero, a request of
that full size may fail because of the alignment offset.

Because of this alignment offset, the returned physical address of the bounce
buffer might start at a larger address if min_align_mask is non-zero. Hence
there may be pre-padding space that is allocated prior to the start of the
bounce buffer. Similarly, the end of the bounce buffer may be rounded up to an
alloc_align_mask boundary, potentially resulting in post-padding space. Any
pre-padding or post-padding space is not initialized by swiotlb code. The
alloc_align_mask parameter is used primarily when mapping for untrusted
devices. It is set to the granule size - 1 so that the bounce buffer occupies
IOMMU granules that do not contain any unrelated kernel data.
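
The padding arithmetic can be sketched in plain C. This is a simplified
user-space model, not the kernel code: the 2 KiB slot size matches IO_TLB_SIZE,
the addresses, sizes, and masks are hypothetical examples, and post-padding is
shown only up to the next slot boundary::

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        const uint64_t IO_TLB_SIZE = 2048;      /* slot size, 2 KiB */
        uint64_t orig_addr      = 0x12345a00;   /* hypothetical original buffer */
        uint64_t size           = 8192;
        uint64_t min_align_mask = 0xfff;        /* device needs 4 KiB offset match */

        /* The bounce buffer must preserve orig_addr's offset within
         * min_align_mask, so the data starts this far into the allocation. */
        uint64_t pre_padding = orig_addr & min_align_mask;

        /* Whole slots needed to hold the pre-padding plus the data. */
        uint64_t nslots = (pre_padding + size + IO_TLB_SIZE - 1) / IO_TLB_SIZE;

        /* Space left over at the end of the last slot. */
        uint64_t post_padding = nslots * IO_TLB_SIZE - (pre_padding + size);

        printf("pre-padding %llu, slots %llu, post-padding %llu\n",
               (unsigned long long)pre_padding, (unsigned long long)nslots,
               (unsigned long long)post_padding);
        return 0;
    }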

Data structures concepts
------------------------
Memory used for swiotlb bounce buffers is allocated from overall system memory
as one or more "pools". The default pool is allocated during system boot with a
default size of 64 MiB. The default pool size may be modified with the
"swiotlb=" kernel boot line parameter. The default size may also be adjusted
by other conditions, such as the CoCo VM heuristic described above. The default
pool is allocated below the 4 GiB physical address line so it works for devices
that can only address 32 bits of physical memory (unless architecture-specific
code provides the SWIOTLB_ANY flag). In a CoCo VM, the pool memory is set up as
unencrypted before being used.

Each pool is divided into "slots" of size IO_TLB_SIZE, which is 2 KiB with
current definitions. IO_TLB_SEGSIZE contiguous slots form a "slot set". A
bounce buffer allocation must fit within a single slot set, which leads to the
maximum bounce buffer size being IO_TLB_SIZE * IO_TLB_SEGSIZE. Multiple smaller
bounce buffers may co-exist in a single slot set if the alignment and size
constraints can be met.
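
With the current definition of IO_TLB_SIZE (2 KiB) and an IO_TLB_SEGSIZE of
128 (treat the exact constant as an assumption here rather than something this
document states), the limit works out as in this small user-space sketch::

    #include <stdio.h>

    #define IO_TLB_SHIFT   11                       /* 2 KiB slots */
    #define IO_TLB_SIZE    (1UL << IO_TLB_SHIFT)
    #define IO_TLB_SEGSIZE 128UL                     /* assumed slots per slot set */

    int main(void)
    {
        unsigned long max_bounce = IO_TLB_SIZE * IO_TLB_SEGSIZE;
        unsigned long request    = 70000;            /* hypothetical mapping size */

        /* Slots consumed by one bounce buffer request (ignoring padding). */
        unsigned long nslots = (request + IO_TLB_SIZE - 1) / IO_TLB_SIZE;

        printf("max bounce buffer: %lu KiB\n", max_bounce / 1024);   /* 256 */
        printf("a %lu-byte request uses %lu slots\n", request, nslots);
        return 0;
    }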

When pre-padding slots are needed to satisfy alignment constraints, using those
initial slots effectively reduces the max size of a bounce buffer. This is not
a problem today because the required alignment never exceeds the IOMMU granule
size, and granules cannot be larger than PAGE_SIZE. But if that were to change,
the reduced maximum would need to be taken into account.

Dynamic swiotlb
---------------
When CONFIG_SWIOTLB_DYNAMIC is enabled, swiotlb can do on-demand expansion of
the memory available for bounce buffers. If a bounce buffer request fails
because the existing pools are full, a background task is kicked off to
allocate an additional pool; the allocation must be done in the background
because it may block, and swiotlb requests may not. In the meantime, the
failing request is satisfied from a "transient" pool so that it does not return
an error. A transient pool has the size of the bounce buffer request, and is
deleted when the bounce buffer is freed. Transient pools are comparatively
expensive, so they are only a stopgap until the background task can add another
non-transient pool.
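
The decision flow can be outlined in C. This is a user-space outline only; the
function names below are hypothetical stand-ins for the steps described above,
not kernel APIs::

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdio.h>

    /* Hypothetical stand-ins for the steps described above. */
    static bool alloc_from_existing_pools(size_t size) { (void)size; return false; }
    static void kick_background_pool_add(void) { puts("scheduling pool growth"); }
    static bool alloc_transient_pool(size_t size) { (void)size; puts("transient pool"); return true; }

    /* Non-blocking bounce buffer allocation with dynamic expansion. */
    static bool bounce_alloc(size_t size)
    {
        if (alloc_from_existing_pools(size))
            return true;

        /* Can't block here, so defer real pool growth to a worker task... */
        kick_background_pool_add();

        /* ...and satisfy this request from a request-sized transient pool. */
        return alloc_transient_pool(size);
    }

    int main(void)
    {
        return bounce_alloc(65536) ? 0 : 1;
    }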

Memory for an added pool must be physically contiguous, so the size is limited
to MAX_PAGE_ORDER pages (e.g., 4 MiB on a typical x86 system). Due to memory
fragmentation, a max size allocation may not be available, so progressively
smaller sizes are tried until the allocation succeeds, but with a minimum size
of 1 MiB. Given sufficient system memory, expansion can continue by adding more
pools as needed.

Each pool is divided into "areas" to reduce contention when multiple CPUs
allocate bounce buffers in parallel; in the default pool, the number of areas
is based on the number of CPUs. Because the new pool size is typically a few
MiB at most, the number of areas will likely be smaller. For example, with a
new pool size of 4 MiB and the 256 KiB minimum area size, only 16 areas can be
created. If the system has more CPUs than that, some CPUs must share an area.
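
The area-count arithmetic can be sketched as follows, treating the "one area
per CPU, capped by the pool size" rule as an assumption for illustration::

    #include <stdio.h>

    int main(void)
    {
        const unsigned long KiB = 1024, MiB = 1024 * KiB;
        unsigned long pool_size     = 4 * MiB;      /* new dynamic pool */
        unsigned long min_area_size = 256 * KiB;    /* one full slot set */
        unsigned long nr_cpus       = 64;           /* hypothetical system */

        /* An area must hold at least one slot set, so the pool size caps
         * the number of areas regardless of the CPU count. */
        unsigned long max_areas = pool_size / min_area_size;
        unsigned long nareas = nr_cpus < max_areas ? nr_cpus : max_areas;

        printf("%lu areas for %lu CPUs\n", nareas, nr_cpus);
        return 0;
    }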

Finding the pool that contains a particular bounce buffer address requires
searching the list of pools, so performance may degrade with a large number of
dynamic pools. The data structures could be improved for better scalability
should that become necessary.

Data Structure Details
----------------------
swiotlb is managed with four primary data structures: io_tlb_mem, io_tlb_pool,
io_tlb_area, and io_tlb_slot. io_tlb_mem describes a swiotlb memory allocator,
which includes the default pool and any dynamic or transient pools associated
with it. Limited statistics on swiotlb usage are kept per memory allocator and
are stored in this data structure. These statistics are available under
/sys/kernel/debug/swiotlb when CONFIG_DEBUG_FS is set.

io_tlb_pool describes a single pool of bounce buffer memory, and io_tlb_area
describes one area within a pool. The io_tlb_area array for a pool has an
entry for each area, and is accessed using a 0-based area index derived from
the CPU making the swiotlb request.

io_tlb_slot describes an individual memory slot in the pool, with size
IO_TLB_SIZE. The io_tlb_slot array for a pool is indexed by the slot index
computed from the bounce buffer address relative to the starting memory
address of the pool. The size of struct io_tlb_slot is 24 bytes, so the
overhead is about 1% of the slot size.
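
Both the slot indexing and the metadata overhead are straightforward
arithmetic, sketched here in user space with hypothetical addresses::

    #include <stdio.h>
    #include <stdint.h>

    #define IO_TLB_SHIFT 11
    #define IO_TLB_SIZE  (1 << IO_TLB_SHIFT)        /* 2 KiB */

    int main(void)
    {
        uint64_t pool_start = 0x80000000;           /* hypothetical pool base */
        uint64_t tlb_addr   = 0x80005800;           /* a bounce buffer address */

        /* Index into the pool's io_tlb_slot array for this bounce buffer. */
        uint64_t slot_index = (tlb_addr - pool_start) >> IO_TLB_SHIFT;

        /* Per-slot metadata overhead: 24 bytes of struct per 2 KiB slot. */
        double overhead = 24.0 * 100.0 / IO_TLB_SIZE;

        printf("slot index %llu, metadata overhead %.1f%%\n",
               (unsigned long long)slot_index, overhead);
        return 0;
    }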

The swiotlb data structures must save the original memory buffer address so
that it is available when a "sync" operation copies data between the bounce
buffer and the original buffer, and must save the mapping size, which bounds
the size of the "sync" operation. The "alloc_size" field is not used except
for validating that "sync" operations stay within the allocation.

When swiotlb_tbl_map_single() satisfies a device's min_align_mask and
alloc_align_mask requirements, it may allocate pre-padding space across zero or
more slots. But the address passed to swiotlb_tbl_unmap_single() points to the
start of the data, not the padding, so the number of padding slots is recorded
in the "pad_slots" field so that the unmap operation can free them as well.
The "pad_slots" value is recorded only in the first non-padding slot allocated
to the bounce buffer.

Restricted pools
----------------
The swiotlb machinery is also used for "restricted pools", which are bounce
buffer pools dedicated to a particular device (CONFIG_DMA_RESTRICTED_POOL).
Each restricted pool is based on its own io_tlb_mem data structure that is
independent of the main swiotlb io_tlb_mem.