1 .. SPDX-License-Identifier: GPL-2.0
18 the normal DMA map, unmap, and sync APIs when programming a device to do DMA.
19 These APIs use the device DMA attributes and kernel-wide settings to determine
21 freeing, and sync'ing of bounce buffers. Since the DMA attributes are per
29 Usage Scenarios
30 ---------------
33 only provide 32-bit DMA addresses. By allocating bounce buffer memory below
40 directed to guest memory that is unencrypted. CoCo VMs set a kernel-wide option
43 the Linux kernel DMA layer does "sync" operations to cause the CPU to copy the
54 IOMMU access control is per-granule, the untrusted device can gain access to
59 Core Functionality
60 ------------------
68 each segment. swiotlb_tbl_map_single() always does a "sync" operation (i.e., a
74 unmap does a "sync" operation to cause a CPU copy of the data from the bounce
77 swiotlb also provides "sync" APIs that correspond to the dma_sync_*() APIs that
79 device. The swiotlb "sync" APIs cause a CPU copy of the data between the
81 "sync" APIs support doing a partial sync, where only a subset of the bounce
85 ------------------------------
86 The swiotlb map/unmap/sync APIs must operate without blocking, as they are
89 pre-allocated at boot time (but see Dynamic swiotlb below). Because swiotlb
93 The need to pre-allocate the default swiotlb pool creates a boot-time tradeoff.
95 always be satisfied, as the non-blocking requirement means requests can't wait
97 this pre-allocated memory is not available for other uses in the system. The
100 with a max of 1 GiB, which has the potential to be very wasteful of memory.
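One way to manage this tradeoff is to override the default pool size on the
kernel command line with the ``swiotlb=`` parameter, which takes a slot count
(syntax per Documentation/admin-guide/kernel-parameters.txt; the value below is
only an example):

```
swiotlb=65536
```

Assuming the kernel's 2 KiB slot size, 65536 slots pre-allocates a 128 MiB
pool.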
109 must be limited to that 256 KiB. This value is communicated to higher-level
111 higher-level code fails to account for this limit, it may make requests that
114 A key device DMA setting is "min_align_mask", which is a power of 2 minus 1
118 min_align_mask is non-zero, it may produce an "alignment offset" in the address
124 swiotlb, max_sectors_kb will be 256 KiB. When min_align_mask is non-zero,
130 bounce buffer might start at a larger address if min_align_mask is non-zero.
131 Hence there may be pre-padding space that is allocated prior to the start of
133 alloc_align_mask boundary, potentially resulting in post-padding space. Any
134 pre-padding or post-padding space is not initialized by swiotlb code. The
136 devices. It is set to the granule size - 1 so that the bounce buffer is
140 ------------------------
149 it works for devices that can only address 32 bits of physical memory (unless
150 architecture-specific code provides the SWIOTLB_ANY flag). In a CoCo VM, the
159 IO_TLB_SEGSIZE. Multiple smaller bounce buffers may co-exist in a single slot
193 Dynamic swiotlb
194 ---------------
195 When CONFIG_SWIOTLB_DYNAMIC is enabled, swiotlb can do on-demand expansion of
208 background task can add another non-transient pool.
214 until it succeeds, but with a minimum size of 1 MiB. Given sufficient system
235 Data Structure Details
236 ----------------------
251 entry for each area, and is accessed using a 0-based area index derived from the
259 overhead is about 1% of the slot size.
269 can be used when doing sync operations. This original address is saved in the
272 Second, the io_tlb_slot array must handle partial sync requests. In such cases,
277 address to do the CPU copy dictated by the "sync". So an adjusted original
281 the size of the "sync" operation. The "alloc_size" field is not used except for
286 at that slot. A "0" indicates that the slot is occupied. A value of "1"
293 "list" field is initialized to IO_TLB_SEGSIZE down to 1 for the slots in every
299 requirements, it may allocate pre-padding space across zero or more slots. But
304 The "pad_slots" value is recorded only in the first non-padding slot allocated
307 Restricted pools
308 ----------------