This is a guide to device driver writers on how to use the DMA API
with example pseudo-code.  For a concise description of the API, see
Documentation/core-api/dma-api.rst.
The kernel normally uses virtual addresses: any address returned by
kmalloc(), vmalloc(), and similar interfaces is a virtual address.  The
virtual memory system translates virtual addresses to CPU physical
addresses, which are stored as "phys_addr_t" or "resource_size_t".
I/O devices use a third kind of address: a "bus address".  If a device has
registers at an MMIO address, or if it performs DMA to read or write system
memory, the addresses used by the device are bus addresses.
For example, even if a system
supports 64-bit addresses for main memory and PCI BARs, it may use an IOMMU
so devices only need to use 32-bit DMA addresses.
Here's a picture and some examples::

                 CPU                  CPU                  Bus
               Virtual              Physical             Address
               Address              Address               Space
                Space                Space

              +-------+             +------+             +------+
              |       |             |MMIO  |   Offset    |      |
              |       |  Virtual    |Space |   applied   |      |
            C +-------+ --------> B +------+ ----------> +------+ A
              |       |  mapping    |      |   by host   |      |
    +-----+   |       |             |      |   bridge    |      |   +--------+
    |     |   |       |             +------+             |      |   |        |
    | CPU |   |       |             | RAM  |             |      |   | Device |
    |     |   |       |             |      |             |      |   |        |
    +-----+   +-------+             +------+             +------+   +--------+
              |       |  Virtual    |Buffer|   Mapping   |      |
            X +-------+ --------> Y +------+ <---------- +------+ Z
              |       |  mapping    | RAM  |   by IOMMU
              |       |             |      |
              |       |             |      |
              +-------+             +------+
If the device supports DMA, the driver sets up a buffer using kmalloc() or
a similar interface, which returns a virtual address (X).  The virtual
memory system maps X to a physical address (Y) in system RAM.  The driver
can use virtual address X to access the buffer, but the device itself
cannot, because DMA doesn't go through the CPU virtual memory system.
Note that the DMA API works with any bus independent of the underlying
microprocessor architecture.  You should use the DMA API rather than the
bus-specific DMA API, i.e., use the dma_map_*() interfaces rather than the
pci_map_*() interfaces.
First of all, you should make sure::

	#include <linux/dma-mapping.h>

is in your driver, which provides the definition of dma_addr_t.  This type
can hold any valid DMA address for the platform and should be used
everywhere you hold a DMA address returned from the DMA mapping functions.
If you acquired your memory via the page allocator
(i.e. __get_free_page*()) or the generic memory allocators
(i.e. kmalloc() or kmem_cache_alloc()) then you may DMA to/from
that memory using the addresses returned from those routines.
Even if those classes of memory could physically work with DMA, you'd need
to ensure the I/O buffers were cacheline-aligned.  Without that, you'd see
cacheline sharing problems (data corruption) on CPUs with DMA-incoherent
caches.  (The CPU could write to one word, DMA would write to a different
one in the same cache line, and one of them could be overwritten.)
By default, the kernel assumes that your device can address 32 bits of DMA
addressing.  For a 64-bit capable device, this needs to be increased, and
for a device with limitations, it needs to be decreased.
Special note about PCI: the PCI-X specification requires PCI-X devices to
support 64-bit addressing (DAC) for all transactions.  And at least one
platform (SGI SN2) requires 64-bit consistent allocations to operate
correctly when the IO bus is in PCI-X mode.
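
You must inform the kernel of your device's addressing capabilities by
setting the DMA mask.  The query is performed via a call to
dma_set_mask_and_coherent()::

	int dma_set_mask_and_coherent(struct device *dev, u64 mask);

which sets the mask for both streaming and coherent APIs together.  If you
have some special requirements, dma_set_mask() and dma_set_coherent_mask()
can be used instead to set the streaming and coherent masks separately.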
Here, dev is a pointer to the device struct of your device, and mask is a
bit mask describing which bits of an address your device supports.  Often
the device struct of your device is embedded in the bus-specific device
struct of your device.  For example, &pdev->dev is a pointer to the device
struct of a PCI device (pdev is a pointer to the PCI device struct of your
device).
These calls usually return zero to indicate your device can perform DMA
properly on the machine given the address mask you provided, but they might
return an error if the mask is too small to be supportable on the given
system.  If it returns non-zero, your device cannot perform DMA properly on
this platform, and attempting to do so will result in undefined behavior.
You must not use DMA on this device unless the dma_set_mask family of
functions has succeeded.
This means that in the failure case, you have two options:

1) Use some non-DMA mode for data transfer, if possible.
2) Ignore this device and do not initialize it.

It is recommended that your driver print a kernel KERN_WARNING message when
setting the DMA mask fails.  In this manner, if a user of your driver reports
that performance is bad or that the device is not even detected, you can ask
them for the kernel messages to find out exactly why.
The 24-bit addressing device would do something like this::
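
	if (dma_set_mask_and_coherent(dev, DMA_BIT_MASK(24))) {
		dev_warn(dev, "mydev: 24-bit DMA addressing not available\n");
		goto ignore_this_device;
	}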
The standard 64-bit addressing device would do something like this::
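
	if (dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64))) {
		dev_warn(dev, "mydev: No suitable DMA available\n");
		goto ignore_this_device;
	}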
If the device only supports 32-bit addressing for descriptors in the
coherent allocations, but supports full 64-bits for streaming mappings
it would look like this::
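
	if (dma_set_mask(dev, DMA_BIT_MASK(64))) {
		dev_warn(dev, "mydev: No suitable DMA available\n");
		goto ignore_this_device;
	}

Here only the streaming mask is raised to 64 bits; the coherent mask is
left at its 32-bit default.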
The coherent mask will always be able to set the same or a smaller mask as
the streaming mask.  However for the rare case that a device driver only
uses consistent allocations, one would have to check the return value from
dma_set_coherent_mask().
Finally, if your device can only drive the low 24-bits of
address you might do something like::

	if (dma_set_mask(dev, DMA_BIT_MASK(24))) {
		dev_warn(dev, "mydev: 24-bit DMA addressing not available\n");
		goto ignore_this_device;
	}
When dma_set_mask() or dma_set_mask_and_coherent() is successful, and
returns zero, the kernel saves away this mask you have provided.  The
kernel will use this information later when you make DMA mappings.

If your device supports multiple functions (for example a sound card
that provides playback and record functions) and the various different
functions have _different_ DMA addressing limitations, you may wish to
probe each mask and only provide the functionality which the machine
can handle.  It is important that the last call to dma_set_mask() be
for the most specific mask.
Here is pseudo-code showing how this might be done::
	#define PLAYBACK_ADDRESS_BITS	DMA_BIT_MASK(32)
	#define RECORD_ADDRESS_BITS	DMA_BIT_MASK(24)

	struct my_sound_card {
		dma_addr_t playback_dma_addr;
		dma_addr_t record_dma_addr;

		int playback_enabled;
		int record_enabled;
		char *name;
	};

	static int mycard_probe_dma(struct my_sound_card *card,
				    struct device *dev)
	{
		...
		if (!dma_set_mask(dev, PLAYBACK_ADDRESS_BITS)) {
			card->playback_enabled = 1;
		} else {
			card->playback_enabled = 0;
			dev_warn(dev, "%s: Playback disabled due to DMA limitations\n",
				 card->name);
		}
		if (!dma_set_mask(dev, RECORD_ADDRESS_BITS)) {
			card->record_enabled = 1;
		} else {
			card->record_enabled = 0;
			dev_warn(dev, "%s: Record disabled due to DMA limitations\n",
				 card->name);
		}
		...
	}
There are two types of DMA mappings:

- Consistent DMA mappings which are usually mapped at driver
  initialization, unmapped at the end and for which the hardware should
  guarantee that the device and the CPU can access the data
  in parallel and will see updates made by each other without any
  explicit software flushing.

  Think of "consistent" as "synchronous" or "coherent".
  Good examples of what to use consistent mappings for are:

	- Network card DMA ring descriptors.
	- SCSI adapter mailbox command data structures.
	- Device firmware microcode executed out of
	  main memory.
  The invariant these examples all require is that any CPU store to memory
  is immediately visible to the device, and vice versa.  Consistent mappings
  guarantee this.

  Note that consistent DMA memory does not preclude the usage of proper
  memory barriers.  The CPU may reorder stores to consistent memory just as
  it may normal memory.  Example: if it is important for the device to see
  the first word of a descriptor updated before the second, you must do
  something like::

	desc->word0 = address;
	wmb();
	desc->word1 = DESC_VALID;

  in order to get correct behavior on all platforms.
- Streaming DMA mappings which are usually mapped for one DMA
  transfer, unmapped right after it (unless you use dma_sync_* below)
  and for which hardware can optimize for sequential accesses.

  Think of "streaming" as "asynchronous" or "outside the coherency
  domain".
  Good examples of what to use streaming mappings for are:

	- Networking buffers transmitted/received by a device.
	- Filesystem buffers written/read by a SCSI device.
Also, systems with caches that aren't DMA-coherent will work better
when the underlying buffers don't share cache lines with other data.
To allocate and map large (PAGE_SIZE or so) consistent DMA regions,
you should do::
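
	dma_addr_t dma_handle;

	cpu_addr = dma_alloc_coherent(dev, size, &dma_handle, gfp);

where dev is a ``struct device *``.  This may be called in interrupt
context with the GFP_ATOMIC flag.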
The consistent DMA mapping interfaces will by default return a DMA address
which is 32-bit addressable.  Even if the device indicates (via the DMA mask)
that it may address the upper 32-bits, consistent allocation will only
return > 32-bit addresses for DMA if the consistent DMA mask has been
explicitly changed via dma_set_coherent_mask().  This is true of the
dma_pool interface as well.
The CPU virtual address and the DMA address are both guaranteed to be
aligned to the smallest PAGE_SIZE order which is greater than or equal
to the requested size.  This invariant exists (for example) to guarantee
that if you allocate a chunk which is smaller than or equal to 64
kilobytes, the extent of the buffer you receive will not cross a 64K
boundary.
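
To unmap and free such a DMA region, you call::

	dma_free_coherent(dev, size, cpu_addr, dma_handle);

where dev and size are the same as in the allocation call, and cpu_addr
and dma_handle are the values dma_alloc_coherent() returned to you.
This function may not be called in interrupt context.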
If your driver needs lots of smaller memory regions, you can write
custom code to subdivide pages returned by dma_alloc_coherent(),
or you can use the dma_pool API to do that.  A dma_pool is like
a kmem_cache, but it uses dma_alloc_coherent(), not __get_free_pages().
Also, it understands common hardware constraints for alignment,
like queue heads needing to be aligned on N-byte boundaries.
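
Create a dma_pool like this::

	struct dma_pool *pool;

	pool = dma_pool_create(name, dev, size, align, boundary);

The "name" is for diagnostics; dev and size are as above.  The hardware
alignment requirement for this type of data is "align" (expressed in
bytes, and must be a power of two).  If your device has no boundary
crossing restrictions, pass 0 for boundary.  Allocate from and free back
to the pool with dma_pool_alloc() and dma_pool_free()::

	cpu_addr = dma_pool_alloc(pool, flags, &dma_handle);
	...
	dma_pool_free(pool, cpu_addr, dma_handle);

Destroy the pool with dma_pool_destroy() once all memory has been freed
back to it.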
Another advantage of tightly specifying the DMA direction (outside of
potential platform-specific optimizations of such) is for debugging.
Some platforms actually have a write permission boolean which DMA mappings
can be marked with, much like page protections in the user program address
space.  Such platforms can and do report errors in the kernel logs when the
DMA controller hardware detects violation of the permission setting.
To map a single region, you do::

	struct device *dev = &my_dev->dev;
	dma_addr_t dma_handle;
	void *addr = buffer->ptr;
	size_t size = buffer->len;

	dma_handle = dma_map_single(dev, addr, size, direction);
	if (dma_mapping_error(dev, dma_handle)) {
		/*
		 * reduce current DMA mapping usage,
		 * delay and try again later or
		 * reset driver.
		 */
		goto map_error_handling;
	}

and to unmap it::

	dma_unmap_single(dev, dma_handle, size, direction);

You should call dma_mapping_error() as dma_map_single() could fail and
return an error.  You should call dma_unmap_single() when the DMA activity
is finished, e.g., from the interrupt which told you that the DMA transfer
is done.
Using CPU pointers like this for single mappings has a disadvantage:
you cannot reference HIGHMEM memory in this way.  Thus, there is a
map/unmap interface pair akin to dma_{map,unmap}_single().  These
interfaces deal with page/offset pairs instead of CPU pointers.
Specifically::

	struct device *dev = &my_dev->dev;
	dma_addr_t dma_handle;
	struct page *page = buffer->page;
	unsigned long offset = buffer->offset;
	size_t size = buffer->len;

	dma_handle = dma_map_page(dev, page, offset, size, direction);
	if (dma_mapping_error(dev, dma_handle)) {
		/*
		 * reduce current DMA mapping usage,
		 * delay and try again later or
		 * reset driver.
		 */
		goto map_error_handling;
	}

	...

	dma_unmap_page(dev, dma_handle, size, direction);

Here, "offset" means byte offset within the given page.
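
With scatterlists, you map a region gathered from several regions by::

	int i, count = dma_map_sg(dev, sglist, nents, direction);
	struct scatterlist *sg;

	for_each_sg(sglist, sg, count, i) {
		hw_address[i] = sg_dma_address(sg);
		hw_len[i] = sg_dma_len(sg);
	}

where nents is the number of entries in the sglist.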
The implementation is free to merge several consecutive sglist entries
into one (e.g. if DMA mapping is done with PAGE_SIZE granularity, any
consecutive sglist entries can be merged into one provided the first one
ends and the second one starts on a page boundary - in fact this is a huge
advantage for cards which either cannot do scatter-gather or have a very
limited number of scatter-gather entries) and returns the actual number
of sg entries it mapped them to.  On failure 0 is returned.
Then you should loop count times (note: count can be less than nents)
and use sg_dma_address() and sg_dma_len() macros where you previously
accessed sg->address and sg->length as shown above.
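
To unmap a scatterlist, just call::

	dma_unmap_sg(dev, sglist, nents, direction);

Again, make sure DMA activity has already finished.  Note: the 'nents'
argument must be the same one you passed into dma_map_sg(), *not* the
'count' value returned from it.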
If you need to use the same streaming DMA region multiple times and touch
the data in between the DMA transfers, the buffer needs to be synced
properly in order for the CPU and device to see the most up-to-date and
correct copy of the DMA buffer.

So, firstly, just map it with dma_map_{single,sg}(), and after each DMA
transfer call either::
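
	dma_sync_single_for_cpu(dev, dma_handle, size, direction);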
or::
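
	dma_sync_sg_for_cpu(dev, sglist, nents, direction);

as appropriate.

Then, if you wish to let the device get at the DMA area again,
finish accessing the data with the CPU, and then before actually
giving the buffer to the hardware call either::

	dma_sync_single_for_device(dev, dma_handle, size, direction);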
or::
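
	dma_sync_sg_for_device(dev, sglist, nents, direction);

before giving the memory to the device again.  After the last DMA transfer
call one of the DMA unmap routines dma_unmap_{single,sg}().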
Here is pseudo code which shows a situation in which you would need to use
the dma_sync_*() interfaces::

	my_card_setup_receive_buffer(struct my_card *cp, char *buffer, int len)
	{
		dma_addr_t mapping;

		mapping = dma_map_single(cp->dev, buffer, len, DMA_FROM_DEVICE);
		if (dma_mapping_error(cp->dev, mapping)) {
			/*
			 * reduce current DMA mapping usage,
			 * delay and try again later or
			 * reset driver.
			 */
			goto map_error_handling;
		}

		cp->rx_buf = buffer;
		cp->rx_len = len;
		cp->rx_dma = mapping;

		give_rx_buf_to_card(cp);
	}
	my_card_interrupt_handler(int irq, void *devid, struct pt_regs *regs)
	{
		struct my_card *cp = devid;

		...
		if (read_card_status(cp) == RX_BUF_TRANSFERRED) {
			struct my_card_header *hp;

			/* Examine the header to see if we wish
			 * to accept the data.  But synchronize
			 * the DMA transfer with the CPU first
			 * so that we see updated contents.
			 */
			dma_sync_single_for_cpu(cp->dev, cp->rx_dma,
						cp->rx_len,
						DMA_FROM_DEVICE);

			/* Now it is safe to examine the buffer. */
			hp = (struct my_card_header *) cp->rx_buf;
			if (header_is_ok(hp)) {
				dma_unmap_single(cp->dev, cp->rx_dma, cp->rx_len,
						 DMA_FROM_DEVICE);
				pass_to_upper_layers(cp->rx_buf);
				make_and_setup_new_rx_buf(cp);
			} else {
				/* CPU should not write to a
				 * DMA_FROM_DEVICE-mapped area,
				 * so dma_sync_single_for_device() is
				 * not needed here.  It would be required
				 * for DMA_BIDIRECTIONAL mapping if the
				 * memory was modified.
				 */
				give_rx_buf_to_card(cp);
			}
		}
	}
DMA address space is limited on some architectures and an allocation
failure can be determined by:

- checking if dma_alloc_coherent() returns NULL or dma_map_sg returns 0

- checking the dma_addr_t returned from dma_map_single() and dma_map_page()
  by using dma_mapping_error()::

	dma_addr_t dma_handle;

	dma_handle = dma_map_single(dev, addr, size, direction);
	if (dma_mapping_error(dev, dma_handle)) {
		/*
		 * reduce current DMA mapping usage,
		 * delay and try again later or
		 * reset driver.
		 */
		goto map_error_handling;
	}
- unmap pages that are already mapped, when a mapping error occurs in the
  middle of a multiple page mapping attempt.  These examples are applicable
  to dma_map_page() as well.

Example 1::

	dma_addr_t dma_handle1;
	dma_addr_t dma_handle2;

	dma_handle1 = dma_map_single(dev, addr, size, direction);
	if (dma_mapping_error(dev, dma_handle1)) {
		/*
		 * reduce current DMA mapping usage,
		 * delay and try again later or
		 * reset driver.
		 */
		goto map_error_handling1;
	}
	dma_handle2 = dma_map_single(dev, addr, size, direction);
	if (dma_mapping_error(dev, dma_handle2)) {
		/*
		 * reduce current DMA mapping usage,
		 * delay and try again later or
		 * reset driver.
		 */
		goto map_error_handling2;
	}

	...

	map_error_handling2:
		dma_unmap_single(dev, dma_handle1, size, direction);
	map_error_handling1:

Example 2::

	/*
	 * if buffers are allocated in a loop, unmap all mapped buffers when
	 * a mapping error is detected in the middle
	 */

	dma_addr_t dma_addr;
	dma_addr_t array[DMA_BUFFERS];
	int save_index = 0;

	for (i = 0; i < DMA_BUFFERS; i++) {

		...

		dma_addr = dma_map_single(dev, addr, size, direction);
		if (dma_mapping_error(dev, dma_addr)) {
			/*
			 * reduce current DMA mapping usage,
			 * delay and try again later or
			 * reset driver.
			 */
			goto map_error_handling;
		}
		array[i] = dma_addr;
		save_index++;
	}

	...

	map_error_handling:

	for (i = 0; i < save_index; i++) {

		...

		dma_unmap_single(dev, array[i], size, direction);
	}
On many platforms, dma_unmap_{single,page}() is simply a nop, so keeping
track of the mapping address and length is a waste of space.  Instead of
filling your drivers with ifdefs to "work around" this, use the following
facilities.  Instead of describing the macros one by one, we'll transform
some example code.

1) Use DEFINE_DMA_UNMAP_{ADDR,LEN} in state saving structures.
   Example, before::

	struct ring_state {
		struct sk_buff *skb;
		dma_addr_t mapping;
		__u32 len;
	};

   after::

	struct ring_state {
		struct sk_buff *skb;
		DEFINE_DMA_UNMAP_ADDR(mapping);
		DEFINE_DMA_UNMAP_LEN(len);
	};

2) Use dma_unmap_{addr,len}_set() to set these values.
   Example, before::

	ringp->mapping = FOO;
	ringp->len = BAR;

   after::

	dma_unmap_addr_set(ringp, mapping, FOO);
	dma_unmap_len_set(ringp, len, BAR);

3) Use dma_unmap_{addr,len}() to access these values.
   Example, before::

	dma_unmap_single(dev, ringp->mapping, ringp->len,
			 DMA_FROM_DEVICE);

   after::

	dma_unmap_single(dev,
			 dma_unmap_addr(ringp, mapping),
			 dma_unmap_len(ringp, len),
			 DMA_FROM_DEVICE);

It really should be self-explanatory.  We treat the ADDR and LEN
separately, because it is possible for an implementation to only
need the address in order to perform the unmap operation.
Architectures must ensure that kmalloc'ed buffers are
DMA-safe.  Drivers and subsystems depend on it.  If an architecture
isn't fully DMA-coherent (i.e. hardware doesn't ensure that data in
the CPU cache is identical to data in main memory), ARCH_DMA_MINALIGN
must be set so that the memory allocator makes sure that kmalloc'ed
buffers don't share a cache line with others.  See
arch/arm/include/asm/cache.h as an example.

Note that ARCH_DMA_MINALIGN is about DMA memory alignment constraints.
You don't need to worry about the architecture data alignment
constraints (e.g. the alignment constraints about 64-bit objects).
This document, and the API itself, would not be in their current form
without the feedback and suggestions from numerous individuals, including
David Mosberger-Tang <davidm@hpl.hp.com>.