This is a guide to device driver writers on how to use the DMA API
with example pseudo-code. For a concise description of the API, see
Documentation/core-api/dma-api.rst.

address is not directly useful to a driver; it must use ioremap() to map
the space and produce a virtual address.

For example, even if a system
supports 64-bit addresses for main memory and PCI BARs, it may use an IOMMU
so devices only need to use 32-bit DMA addresses.

               CPU                  CPU                  Bus
             Virtual              Physical             Address
             Address              Address               Space
              Space                Space

            +-------+             +------+             +------+
            |       |             |MMIO  |   Offset    |      |
            |       |  Virtual    |Space |   applied   |      |
          C +-------+ --------> B +------+ ----------> +------+ A
            |       |  mapping    |      |   by host   |      |
  +-----+   |       |             |      |   bridge    |      |   +--------+
  |     |   |       |             +------+             |      |   |        |
  | CPU |   |       |             | RAM  |             |      |   | Device |
  |     |   |       |             |      |             |      |   |        |
  +-----+   +-------+             +------+             +------+   +--------+
            |       |  Virtual    |Buffer|   Mapping   |      |
          X +-------+ --------> Y +------+ <---------- +------+ Z
            |       |  mapping    | RAM  |   by IOMMU
            |       |             |      |
            |       |             |      |
            +-------+             +------+

When a driver claims a device, it typically uses ioremap() to map physical
address B at a virtual address (C). It can then use, e.g., ioread32(C),
to access the device registers at bus address A.

In some simple systems, the device can do DMA directly to physical address
Y. But in many others, there is IOMMU hardware that translates DMA
addresses to physical addresses, e.g., it translates Z to Y. This is part
of the reason for the DMA API: the driver can give a virtual address X to
an interface like dma_map_single(), which sets up any required IOMMU
mapping and returns the DMA address Z. The driver then tells the device to
do DMA to Z, and the IOMMU maps it to the buffer at address Y in system
RAM.

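As a rough sketch of that flow (BUF_SIZE, dev_priv, the err label and
my_device_start_dma() are hypothetical placeholders here, and dev is the
device's struct device pointer; dma_map_single(), dma_mapping_error() and
dma_unmap_single() are the real API calls), a driver might do::

	void *x = kmalloc(BUF_SIZE, GFP_KERNEL);	/* virtual address X */
	dma_addr_t z;

	/* Set up any required IOMMU mapping; returns DMA address Z. */
	z = dma_map_single(dev, x, BUF_SIZE, DMA_FROM_DEVICE);
	if (dma_mapping_error(dev, z))
		goto err;

	my_device_start_dma(dev_priv, z, BUF_SIZE);	/* device DMAs to Z */
	/* ... wait for the device to finish ... */
	dma_unmap_single(dev, z, BUF_SIZE, DMA_FROM_DEVICE);
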
Note that the DMA API works with any bus independent of the underlying
microprocessor architecture. You should use the DMA API rather than the
bus-specific DMA API, i.e., use the dma_map_*() interfaces rather than the
pci_map_*() interfaces.

First of all, you should make sure::

	#include <linux/dma-mapping.h>

is in your driver, which provides the definition of dma_addr_t. This type
can hold any valid DMA address for the platform and should be used
everywhere you hold a DMA address returned from the DMA mapping functions.

Even if those classes of memory could physically work with DMA, you'd need
to ensure the I/O buffers were cacheline-aligned. Without that, you'd see
cacheline sharing problems (data corruption) on CPUs with DMA-incoherent
caches.

By default, the kernel assumes that your device can address 32 bits of DMA
address space. For a 64-bit capable device, this needs to be increased, and
for a device with limitations, it needs to be decreased.

Special note about PCI: the PCI-X specification requires PCI-X devices to
support 64-bit addressing (DAC) for all transactions. And at least one
platform (SGI SN2) requires 64-bit consistent allocations to operate
correctly when the IO bus is in PCI-X mode.

For correct operation, you must set the DMA mask to inform the kernel about
your device's DMA addressing capabilities.

This is performed via a call to dma_set_mask_and_coherent()::

	int dma_set_mask_and_coherent(struct device *dev, u64 mask);

which will try to set the mask for both streaming and coherent APIs
together. If you have some special requirements, then the following two
separate calls can be used instead:

The setup for streaming mappings is performed via a call to
dma_set_mask()::

	int dma_set_mask(struct device *dev, u64 mask);

The setup for consistent allocations is performed via a call to
dma_set_coherent_mask()::

	int dma_set_coherent_mask(struct device *dev, u64 mask);

Here, dev is a pointer to the device struct of your device, and mask is a
bit mask describing which bits of an address your device supports. Often
the device struct of your device is embedded in the bus-specific device
struct of your device. For example, &pdev->dev is a pointer to the device
struct of a PCI device (pdev is a pointer to the PCI device struct of your
device).

These calls usually return zero to indicate your device can perform DMA
properly on the machine given the address mask you provided, but they might
return an error if the mask is too small to be supportable on the given
system. If they return non-zero, your device cannot perform DMA properly on
this platform, and attempting to do so will result in undefined behavior.
You must not use DMA on this device unless the DMA mask calls succeed.

1) Use some non-DMA mode for data transfer, if possible.

2) Ignore this device and do not initialize it.

It is recommended that your driver print a kernel KERN_ERR message when
setting the DMA mask fails. In this manner, if a user of your driver
reports that performance is slow or that the device is not even detected,
you can ask them for the kernel messages to find out exactly why.

The 24-bit addressing device would do something like this::

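	if (dma_set_mask(dev, DMA_BIT_MASK(24))) {
		dev_warn(dev, "mydev: No suitable DMA available\n");
		goto ignore_this_device;
	}
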
The standard 64-bit addressing device would do something like this::

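	if (dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64)))
		dev_warn(dev, "mydev: No suitable DMA available\n");
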
If the device only supports 32-bit addressing for descriptors in the
coherent allocations, but supports full 64-bits for streaming mappings,
it would look like this::

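	if (dma_set_mask(dev, DMA_BIT_MASK(64)))
		dev_warn(dev, "mydev: No suitable DMA available\n");
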
The coherent mask will always be able to be set to the same or a smaller
mask than the streaming mask. However for the rare case that a device
driver only uses consistent allocations, one would have to check the
return value from dma_set_coherent_mask().

Finally, if your device can only drive the low 24-bits of address you
might do something like::

	if (dma_set_mask(dev, DMA_BIT_MASK(24))) {
		dev_warn(dev, "mydev: 24-bit DMA addressing not available\n");
		goto ignore_this_device;
	}

When dma_set_mask() or dma_set_mask_and_coherent() is successful and
returns zero, the kernel saves away this mask you have provided. The
kernel will use this information later when you make DMA mappings.

If your device supports multiple functions (for example a sound card that
provides playback and record functions) and the various different functions
have _different_ DMA addressing limitations, you may wish to probe each
mask and only provide the functionality which the machine can handle. It
is important that the last call to dma_set_mask() be for the
most specific mask.

Here is pseudo-code showing how this might be done::

	if (!dma_set_mask(dev, DMA_BIT_MASK(32))) {
		card->playback_enabled = 1;
	} else {
		card->playback_enabled = 0;
		dev_warn(dev, "%s: Playback disabled due to DMA limitations\n",
			 card->name);
	}
	if (!dma_set_mask(dev, DMA_BIT_MASK(24))) {
		card->record_enabled = 1;
	} else {
		card->record_enabled = 0;
		dev_warn(dev, "%s: Record disabled due to DMA limitations\n",
			 card->name);
	}

- Consistent DMA mappings which are usually mapped at driver
  initialization, unmapped at the end and for which the hardware should
  guarantee that the device and the CPU can access the data in parallel
  and will see updates made by each other without any explicit software
  flushing.

However, for future compatibility you should
set the consistent mask even if this default is fine for your
driver.

Good examples of what to use consistent mappings for are:

	- Network card DMA ring descriptors.
	- SCSI adapter mailbox command data structures.
	- Device firmware microcode executed out of main memory.

Consistent DMA memory does not preclude the usage of proper memory
barriers. If it is important for the device to see the first word of a
descriptor updated before the second, you must do something like::

	desc->word0 = address;
	wmb();
	desc->word1 = DESC_VALID;

- Streaming DMA mappings which are usually mapped for one DMA transfer,
  unmapped right after it (unless you use dma_sync_* below) and for which
  hardware can optimize for sequential accesses.

Good examples of what to use streaming mappings for are:

	- Networking buffers transmitted/received by a device.
	- Filesystem buffers written/read by a SCSI device.

Neither type of DMA mapping has alignment restrictions that come from the
underlying bus, although some devices may have such restrictions. Also,
systems with caches that aren't DMA-coherent will work better when the
underlying buffers don't share cache lines with other data.

To allocate and map large (PAGE_SIZE or so) consistent DMA regions, you
should do::

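	dma_addr_t dma_handle;

	cpu_addr = dma_alloc_coherent(dev, size, &dma_handle, gfp);

where dev is a struct device *. This may be called in interrupt context
with the GFP_ATOMIC flag.
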
The consistent DMA mapping interfaces will by default return a DMA address
which is 32-bit addressable. Even if the device indicates (via the DMA
mask) that it may address the upper 32-bits, consistent allocation will
only return > 32-bit addresses for DMA if the consistent DMA mask has been
explicitly changed via dma_set_coherent_mask().

Another advantage of specifying this value precisely (outside of
potential platform-specific optimizations of such) is for debugging.

For Networking drivers, it's a rather simple affair. For transmit
packets, map/unmap them with the DMA_TO_DEVICE direction
specifier. For receive packets, just the opposite, map/unmap them
with the DMA_FROM_DEVICE direction specifier.

The streaming DMA mapping routines can be called from interrupt
context. There are two versions of each map/unmap, one which will
map/unmap a single memory region, and one which will map/unmap a
scatterlist.

To map a single region, you do::

	struct device *dev = &my_dev->dev;
	dma_addr_t dma_handle;
	void *addr = buffer->ptr;
	size_t size = buffer->len;

	dma_handle = dma_map_single(dev, addr, size, direction);

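Once the DMA activity is finished, e.g., from the interrupt which told
you that the DMA transfer is done, unmap the region with
dma_unmap_single()::

	dma_unmap_single(dev, dma_handle, size, direction);
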
Using CPU pointers like this for single mappings has a disadvantage: you
cannot use HIGHMEM memory in this way. Thus, there is a
map/unmap interface pair akin to dma_{map,unmap}_single(). These
interfaces deal with page/offset pairs instead of CPU pointers.
Specifically::

	struct device *dev = &my_dev->dev;
	dma_addr_t dma_handle;
	struct page *page = buffer->page;
	unsigned long offset = buffer->offset;
	size_t size = buffer->len;

	dma_handle = dma_map_page(dev, page, offset, size, direction);

Here, "offset" means byte offset within the given page.

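Once the DMA transfer is done, unmap it with dma_unmap_page()::

	dma_unmap_page(dev, dma_handle, size, direction);
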
With scatterlists, you map a region gathered from several regions by::

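	int i, count = dma_map_sg(dev, sglist, nents, direction);
	struct scatterlist *sg;

	for_each_sg(sglist, sg, count, i) {
		hw_address[i] = sg_dma_address(sg);
		hw_len[i] = sg_dma_len(sg);
	}

where nents is the number of entries in the sglist.
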
The implementation is free to merge several consecutive sglist entries
into one (e.g. if DMA mapping is done with PAGE_SIZE granularity, any
consecutive sglist entries can be merged into one provided the first one
ends and the second one starts on a page boundary - in fact this is a huge
advantage for cards which either cannot do scatter-gather or have a very
limited number of scatter-gather entries) and returns the actual number
of sg entries it mapped them to. On failure 0 is returned.

Then you should loop count times (note: this can be less than nents times)
and use sg_dma_address() and sg_dma_len() macros where you previously
accessed sg->address and sg->length as shown above.

If you need to use the same streaming DMA region multiple times and touch
the data in between the DMA transfers, the buffer needs to be synced
properly in order for the CPU and device to see the most up-to-date and
correct copy of the DMA buffer.

So, firstly, just map it with dma_map_{single,sg}(), and after each DMA
transfer call either::

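	dma_sync_single_for_cpu(dev, dma_handle, size, direction);

or::

	dma_sync_sg_for_cpu(dev, sglist, nents, direction);

as appropriate.
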
	mapping = dma_map_single(cp->dev, buffer, len, DMA_FROM_DEVICE);
	if (dma_mapping_error(cp->dev, mapping)) {
		/* reduce DMA mapping usage, retry later, or reset driver */
		goto map_error_handling;
	}

	cp->rx_buf = buffer;
	cp->rx_len = len;
	cp->rx_dma = mapping;

		dma_sync_single_for_cpu(cp->dev, cp->rx_dma, cp->rx_len,
					DMA_FROM_DEVICE);

		/* Now it is safe to examine the buffer. */
		hp = (struct my_card_header *) cp->rx_buf;
		if (header_is_ok(hp)) {
			dma_unmap_single(cp->dev, cp->rx_dma, cp->rx_len,
					 DMA_FROM_DEVICE);
			pass_to_upper_layers(cp->rx_buf);
		} else {
			/* The CPU must not write to a
			 * DMA_FROM_DEVICE-mapped area, so
			 * dma_sync_single_for_device() is not needed here. */
			give_back_to_card(cp);
		}

- checking if dma_alloc_coherent() returns NULL or dma_map_sg() returns 0

- checking the dma_addr_t returned from dma_map_single() and
  dma_map_page() by using dma_mapping_error()::

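	dma_addr_t dma_handle;

	dma_handle = dma_map_single(dev, addr, size, direction);
	if (dma_mapping_error(dev, dma_handle)) {
		/*
		 * reduce current DMA mapping usage,
		 * delay and try again later or
		 * reset driver.
		 */
		goto map_error_handling;
	}
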
- unmap pages that are already mapped, when a mapping error occurs in the
  middle of a multiple page mapping attempt. These examples are applicable
  to dma_map_page() as well.

Use dma_unmap_{addr,len}_set() to set these values. Example, before::

	ringp->mapping = FOO;
	ringp->len = BAR;

after::

	dma_unmap_addr_set(ringp, mapping, FOO);
	dma_unmap_len_set(ringp, len, BAR);

Use dma_unmap_{addr,len}() to access these values. Example, before::

	dma_unmap_single(dev, ringp->mapping, ringp->len,
			 DMA_FROM_DEVICE);

after::

	dma_unmap_single(dev,
			 dma_unmap_addr(ringp, mapping),
			 dma_unmap_len(ringp, len),
			 DMA_FROM_DEVICE);

It really should be self-explanatory. We treat the ADDR and LEN
separately, because it is possible for an implementation to only need the
address in order to perform the unmap operation.

You need to enable CONFIG_NEED_SG_DMA_LENGTH if the architecture
supports IOMMUs (including software IOMMU).

Architectures must ensure that kmalloc'ed buffers are
DMA-safe. Drivers and subsystems depend on it. If an architecture
isn't fully DMA-coherent (i.e. hardware doesn't ensure that data in
the CPU cache is identical to data in main memory), ARCH_DMA_MINALIGN
must be set so that the memory allocator makes sure that kmalloc'ed
buffers don't share a cache line with others. See
arch/arm/include/asm/cache.h as an example.

Note that ARCH_DMA_MINALIGN is about DMA memory alignment constraints. You
don't need to worry about the architecture data alignment constraints
(e.g. the alignment constraints about 64-bit objects).

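For instance, a DMA-incoherent architecture typically ties this value to
its cache line size; a one-line sketch modeled on
arch/arm/include/asm/cache.h::

	#define ARCH_DMA_MINALIGN	L1_CACHE_BYTES
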
David Mosberger-Tang <davidm@hpl.hp.com>