
.. SPDX-License-Identifier: GPL-2.0

===============
DMA and swiotlb
===============

Device drivers do not call swiotlb directly; they use the kernel DMA layer
APIs, which decide when bounce buffering is needed and manage it on the
driver's behalf. These APIs use the device DMA attributes and kernel-wide
settings to determine when bounce buffering is necessary, so drivers are
unaffected by whether their buffers are bounced.
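
For illustration, here is a sketch of a driver using the ordinary DMA API;
my_driver_start_tx(), dev, and buf are hypothetical names, and nothing in the
driver changes when swiotlb bounces the mapping::

  #include <linux/dma-mapping.h>

  /* Hypothetical driver fragment: map a buffer that the device will
   * read.  Whether a swiotlb bounce buffer is substituted is decided
   * inside dma_map_single(); the driver sees only the returned DMA
   * address.
   */
  static int my_driver_start_tx(struct device *dev, void *buf, size_t len)
  {
          dma_addr_t addr = dma_map_single(dev, buf, len, DMA_TO_DEVICE);

          if (dma_mapping_error(dev, addr))
                  return -ENOMEM;

          /* ... program addr into the device and run the transfer ... */

          dma_unmap_single(dev, addr, len, DMA_TO_DEVICE);
          return 0;
  }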

Usage Scenarios
---------------

swiotlb was originally created to handle DMA for devices with addressing
limitations. As physical memory sizes grew beyond 4 GiB, some devices could
only provide 32-bit DMA addresses. By allocating bounce buffer memory below
the 4 GiB line, these devices can still do DMA to buffers located anywhere in
system memory.
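
A device advertises such a limitation by setting its DMA mask; a minimal
sketch, with a hypothetical probe-time helper::

  #include <linux/dma-mapping.h>

  /* Declare that the device generates only 32-bit DMA addresses.
   * Mappings of memory above 4 GiB will then be bounced by swiotlb
   * through pool memory that sits below the 4 GiB line.
   */
  static int my_driver_init_dma(struct device *dev)
  {
          return dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32));
  }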

More recently, Confidential Computing (CoCo) VMs encrypt guest memory by
default, so the host hypervisor and VMM cannot access it. For the host to do
I/O on behalf of the guest, the I/O must be directed to guest memory that is
unencrypted. CoCo VMs set a kernel-wide option to force all DMA I/O to use
bounce buffers, and the bounce buffer memory is set up as unencrypted so that
it is shared with the host.

swiotlb is also used for "untrusted" devices, where an IOMMU constrains the
memory a device may access. The IOMMU grants access in units of a granule,
which may be larger than the DMA buffer. Since IOMMU access control is
per-granule, the untrusted device can gain access to other memory that happens
to share a granule with the DMA buffer. Bounce buffers that are aligned and
padded to granule boundaries close this hole.

Core Functionality
------------------

The primary swiotlb APIs are swiotlb_tbl_map_single() and
swiotlb_tbl_unmap_single(). The "map" API allocates a bounce buffer for a
single physically contiguous buffer and does the initial CPU copy ("sync")
into it as needed; the "unmap" API does a final sync as needed and frees the
bounce buffer.

Memory Pool Sizing and Limits
-----------------------------

The default swiotlb memory pool is
pre-allocated at boot time (but see Dynamic swiotlb below). Because swiotlb
requests can arrive in contexts that are not allowed to block, bounce buffer
memory must already exist when a request is made.

The need to pre-allocate the default swiotlb pool creates a boot-time tradeoff.
If the pool is too small, bounce buffer requests may not
always be satisfied, as the non-blocking requirement means requests can't wait
for memory to become available, and the affected DMA operations fail. If the
pool is too large,
this pre-allocated memory is not available for other uses in the system. The
default pool size is 64 MiB, and it can be overridden with the swiotlb= kernel
boot parameter.

A bounce buffer can span at most IO_TLB_SEGSIZE contiguous slots of IO_TLB_SIZE
each, which caps it at 256 KiB, so the size of any single DMA mapping
must be limited to that 256 KiB. This value is communicated to higher-level
kernel code via dma_max_mapping_size(). If
higher-level code fails to account for this limit, it may make requests that
are too large for swiotlb, and those mapping requests fail.
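
Higher-level code can query the limit directly; a minimal sketch with a
hypothetical helper::

  #include <linux/dma-mapping.h>
  #include <linux/minmax.h>

  /* Clamp an I/O size to what the DMA layer can map in one mapping;
   * with swiotlb bouncing in effect this is typically 256 KiB.
   */
  static size_t my_driver_max_io(struct device *dev, size_t want)
  {
          return min(want, dma_max_mapping_size(dev));
  }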

A device's DMA attributes may include min_align_mask, which requires DMA
addresses to preserve the original buffer's offset within that mask. When
min_align_mask is non-zero, it may produce an "alignment offset" in the address
of the bounce buffer, and that offset consumes part of the 256 KiB. The block
layer takes the mapping limit into account: for a block device doing DMA through
swiotlb, max_sectors_kb will be 256 KiB. When min_align_mask is non-zero,
the usable size of a request may be correspondingly smaller because of the
alignment offset.
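
A driver states its alignment requirement with dma_set_min_align_mask(); a
minimal sketch, using 4 KiB as an example value (the NVMe driver, for
instance, uses its controller page size)::

  #include <linux/dma-mapping.h>
  #include <linux/sizes.h>

  /* Require bounce buffers to preserve the buffer's offset within
   * each 4 KiB boundary, because the device derives the data offset
   * from the low bits of the DMA address.
   */
  static int my_driver_set_alignment(struct device *dev)
  {
          return dma_set_min_align_mask(dev, SZ_4K - 1);
  }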

The bounce buffer always starts on a slot boundary, but the mapped data in the
bounce buffer might start at a larger address if min_align_mask is non-zero.
Hence there may be pre-padding space that is allocated prior to the start of
the mapped data. Similarly, the end of the bounce buffer is rounded up to an
alloc_align_mask boundary, potentially resulting in post-padding space. Any
pre-padding or post-padding space is not initialized by swiotlb code. The
alloc_align_mask is used when creating bounce buffers for untrusted
devices. It is set to the granule size - 1 so that the bounce buffer is
aligned to an IOMMU granule and no other memory shares that granule.
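
The padding arithmetic can be seen in a standalone example; this is an
illustration of the rule described above, not kernel code, and the addresses
are made up::

  #include <stdio.h>

  #define IO_TLB_SHIFT 11                    /* slot size = 2 KiB */
  #define IO_TLB_SIZE  (1UL << IO_TLB_SHIFT)

  int main(void)
  {
          unsigned long min_align_mask = 0xfff;      /* 4 KiB - 1 */
          unsigned long orig_addr = 0x12345a00;      /* example buffer */
          /* offset that the bounce buffer address must preserve */
          unsigned long offset = orig_addr & min_align_mask;  /* 0xa00 */
          /* whole slots of pre-padding before the mapped data */
          unsigned long pad_slots = offset >> IO_TLB_SHIFT;   /* 1 */

          printf("offset 0x%lx -> %lu pre-padding slot(s), data starts "
                 "0x%lx bytes into the next slot\n",
                 offset, pad_slots, offset & (IO_TLB_SIZE - 1));
          return 0;
  }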

Memory Pool Organization
------------------------

The default pool memory is allocated below the 4 GiB physical address line, so
it works for devices that can only address 32 bits of physical memory (unless
architecture-specific code provides the SWIOTLB_ANY flag, in which case the
memory may be located anywhere). In a CoCo VM, the
default pool memory is set up as unencrypted ("decrypted") during boot so that
both the guest and the host can access it.

Pool memory is divided into slots of IO_TLB_SIZE bytes, and a bounce buffer
occupies a run of contiguous slots whose length may not exceed
IO_TLB_SEGSIZE. Multiple smaller bounce buffers may co-exist in a single
segment of IO_TLB_SEGSIZE slots, but a single bounce buffer never crosses a
segment boundary.
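
The arithmetic behind the 256 KiB figure follows directly from these
constants, shown here as a compile-time check (values mirroring
<linux/swiotlb.h>)::

  #define IO_TLB_SHIFT   11                   /* each slot is 2 KiB */
  #define IO_TLB_SIZE    (1UL << IO_TLB_SHIFT)
  #define IO_TLB_SEGSIZE 128                  /* max contiguous slots */

  _Static_assert(IO_TLB_SEGSIZE * IO_TLB_SIZE == 256 * 1024,
                 "128 slots of 2 KiB each = 256 KiB maximum mapping");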

Dynamic swiotlb
---------------

When CONFIG_SWIOTLB_DYNAMIC is enabled, swiotlb can do on-demand expansion of
the amount of memory available for bounce buffers. If a request cannot be
satisfied from the existing pools, a special "transient" pool is allocated to
service just that request, and in the meantime a
background task can add another non-transient pool. Transient pools are freed
when their bounce buffer is unmapped.
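
Whether expansion is possible is a build-time choice, and the default pool can
still be sized at boot; for example (values illustrative)::

  CONFIG_SWIOTLB_DYNAMIC=y    # build-time: allow on-demand pool growth
  swiotlb=65536               # boot param: 65536 slots (128 MiB) default pool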

Data Structure Details
----------------------

swiotlb is managed with four primary data structures: io_tlb_mem, io_tlb_pool,
io_tlb_area, and io_tlb_slot. An io_tlb_mem ties together the pool or pools
that a device may use, while each io_tlb_pool records the location and size of
the memory in the pool, a pointer to an array of io_tlb_area structures, and a
pointer to an array of io_tlb_slot structures that are associated with the pool.
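
A simplified sketch of these structures, assuming only the field subset
described here (the real definitions in <linux/swiotlb.h> and
kernel/dma/swiotlb.c have additional and config-dependent fields)::

  struct io_tlb_slot {
          phys_addr_t    orig_addr;   /* original buffer address for syncs */
          size_t         alloc_size;  /* size of the mapped data */
          unsigned short list;        /* contiguous free slots from here */
          unsigned short pad_slots;   /* pre-padding slots (first slot only) */
  };

  struct io_tlb_area {
          unsigned long  used;        /* slots currently in use */
          unsigned int   index;       /* where the next search starts */
          spinlock_t     lock;        /* serializes slot alloc/free in area */
  };

  struct io_tlb_pool {
          phys_addr_t          start;        /* first physical address */
          phys_addr_t          end;          /* last physical address + 1 */
          unsigned long        nslabs;       /* total slots in the pool */
          unsigned int         nareas;       /* number of areas */
          unsigned int         area_nslabs;  /* slots per area */
          struct io_tlb_area   *areas;       /* per-area state */
          struct io_tlb_slot   *slots;       /* per-slot state */
  };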

To reduce contention when many CPUs allocate bounce buffers concurrently, a
pool's slots are partitioned into "areas", and each area has a spinlock that
must be held to
serialize access to slots in the area. The io_tlb_area array for a pool has an
entry for each area, and is accessed using a 0-based area index derived from the
number of the CPU making the request, so requests from different CPUs tend to
land in different areas.
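
A minimal sketch of that derivation, assuming the simplified structures above
and that nareas is a power of 2::

  /* Derive a 0-based starting area index from the current CPU. */
  static int pick_area(struct io_tlb_pool *pool)
  {
          return raw_smp_processor_id() & (pool->nareas - 1);
  }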

Each area covers area_nslabs slots, and each slot is of size
IO_TLB_SIZE (2 KiB currently). The io_tlb_slot array is indexed by the slot
index, which is derived from the offset of the slot's physical address from
the start of the pool.
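
A sketch of that index calculation, again using the simplified structures and
the IO_TLB_SHIFT constant shown earlier::

  /* Map a physical address inside the pool to its slot index. */
  static unsigned long slot_index(struct io_tlb_pool *pool, phys_addr_t addr)
  {
          return (addr - pool->start) >> IO_TLB_SHIFT;
  }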

The io_tlb_slot array is designed to meet several requirements. First, the DMA
unmap and sync APIs are called with only the bounce buffer address; the
address of the original buffer is not passed again. Since the CPU copy between
the two buffers needs that original address, it is saved at map time in the
orig_addr field of the
io_tlb_slot array.

Second, the io_tlb_slot array must handle partial sync requests. In such cases,
the address passed to the sync API is somewhere in the interior of the bounce
buffer, not at its start. To make the lookup work from any point, each
successive slot of a mapping stores an orig_addr incremented by IO_TLB_SIZE,
so the slot containing the sync address is by itself sufficient to locate the
matching bytes of the original buffer.
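
A sketch of the lookup, assuming the helpers above and ignoring min_align_mask
handling for simplicity::

  /* Find the original address matching a point inside a bounce buffer. */
  static phys_addr_t orig_addr_for(struct io_tlb_pool *pool,
                                   phys_addr_t tlb_addr)
  {
          struct io_tlb_slot *slot = &pool->slots[slot_index(pool, tlb_addr)];

          /* the low bits are the byte offset within this 2 KiB slot */
          return slot->orig_addr + (tlb_addr & (IO_TLB_SIZE - 1));
  }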

Third, the io_tlb_slot array is used to track available slots. The "list" field
in each entry records how many contiguous slots, starting with that one, are
currently free, bounded by the IO_TLB_SEGSIZE segment the slot belongs to; a
value of 0 marks the slot as in use. A search for n contiguous free slots
therefore only has to test whether a candidate entry's "list" value is at
least n.
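
A sketch of such a search within one area, again assuming the simplified
structures (the real search also honors alignment masks and resumes from a
saved per-area index)::

  /* Find nslots contiguous free slots in one area; -1 if none. */
  static int find_free_run(struct io_tlb_pool *pool, int area, int nslots)
  {
          int first = area * pool->area_nslabs;
          int i;

          for (i = first; i < first + pool->area_nslabs; i++)
                  if (pool->slots[i].list >= nslots)
                          return i;
          return -1;
  }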

Fourth, the io_tlb_slot array keeps track of any "padding slots" allocated to
meet alignment constraints. When swiotlb_tbl_map_single() honors min_align_mask
and alloc_align_mask
requirements, it may allocate pre-padding space across zero or more slots. But
swiotlb_tbl_unmap_single() is later called with only the bounce buffer address,
and the padding slots must be freed along with the slots holding mapped data.
The "pad_slots" value is recorded only in the first non-padding slot allocated
to the mapping, where the unmap path can find it and back up over the padding.
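
A sketch of that unmap-time adjustment, assuming the helpers above::

  /* From a bounce buffer address, find the first slot of the whole
   * allocation, including any pre-padding slots.
   */
  static int first_allocated_slot(struct io_tlb_pool *pool,
                                  phys_addr_t tlb_addr)
  {
          int index = slot_index(pool, tlb_addr);

          return index - pool->slots[index].pad_slots;
  }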

Restricted pools
----------------

When CONFIG_DMA_RESTRICTED_POOL is enabled, a device can be bound to its own
"restricted" pool, described by a reserved-memory region in the device tree,
and all of that device's DMA is then bounced through that pool rather than
through general system memory.