=========================
Dynamic DMA mapping Guide
=========================
This is a guide to device driver writers on how to use the DMA API
with example pseudo-code.  For a concise description of the API, see
DMA-API.txt.
For example, even if a system
supports 64-bit addresses for main memory and PCI BARs, it may use an IOMMU
so devices only need to use 32-bit DMA addresses.
             CPU                  CPU                  Bus
           Virtual              Physical             Address
           Address              Address               Space
            Space                Space

          +-------+             +------+             +------+
          |       |             |MMIO  |   Offset    |      |
          |       |  Virtual    |Space |   applied   |      |
        C +-------+ --------> B +------+ ----------> +------+ A
          |       |  mapping    |      |   by host   |      |
+-----+   |       |             |      |   bridge    |      |   +--------+
|     |   |       |             +------+             |      |   |        |
| CPU |   |       |             | RAM  |             |      |   | Device |
|     |   |       |             |      |             |      |   |        |
+-----+   +-------+             +------+             +------+   +--------+
          |       |  Virtual    |Buffer|   Mapping   |      |
        X +-------+ --------> Y +------+ <---------- +------+ Z
          |       |  mapping    | RAM  |   by IOMMU
          |       |             |      |
          |       |             |      |
          +-------+             +------+
This is part of the reason for the DMA API: the driver can give a virtual
address X to an interface like dma_map_single(), which sets up any required
IOMMU mapping and returns the DMA address Z.  The driver then tells the
device to do DMA to Z, and the IOMMU maps it to the buffer at address Y in
system RAM.
So that Linux can use the dynamic DMA mapping, it needs some help from the
drivers, namely it has to take into account that DMA addresses should be
mapped only for the time they are actually used and unmapped after the DMA
transfer.
Note that the DMA API works with any bus independent of the underlying
microprocessor architecture.  You should use the DMA API rather than the
bus-specific DMA API, i.e., use the dma_map_*() interfaces rather than the
pci_map_*() interfaces.
First of all, you should make sure::

    #include <linux/dma-mapping.h>

is in your driver, which provides the definition of dma_addr_t.  This type
can hold any valid DMA address for the platform and should be used
everywhere you hold a DMA address returned from the DMA mapping functions.
The first piece of information you must know is what kernel memory can
be used with the DMA mapping facilities.  There has been an unwritten
rule that you seemed to abide by anyway, but it's worth stating
explicitly.
This rule also means that you may use neither kernel image addresses
(items in data/text/bss segments), nor module image addresses, nor
stack addresses for DMA.  These could all be mapped somewhere entirely
different than the rest of physical memory.  Even if those classes of
memory could physically work with DMA, you'd need to ensure the I/O
buffers were cacheline-aligned.  Without that, you'd see cacheline
sharing problems (data corruption) on CPUs with DMA-incoherent caches.
Does your device have any DMA addressing limitations?  For example, is
your device only capable of driving the low order 24-bits of address?
If so, you need to inform the kernel of this fact.
By default, the kernel assumes that your device can address the full
32-bits.  For a 64-bit capable device, this needs to be increased.  And
for a device with limitations, as discussed in the previous paragraph, it
needs to be decreased.
Special note about PCI: The PCI-X specification requires PCI-X devices to
support 64-bit addressing (DAC) for all transactions.  And at least
one platform (SGI SN2) requires 64-bit consistent allocations to
operate correctly when the IO bus is in PCI-X mode.
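
Both limits are communicated to the kernel through the mask-setting
calls.  For reference, these are the kernel prototypes (the streaming
mask is set with dma_set_mask(), the consistent/coherent mask with
dma_set_coherent_mask())::

    int dma_set_mask(struct device *dev, u64 mask);

    int dma_set_coherent_mask(struct device *dev, u64 mask);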
Here, dev is a pointer to the device struct of your device, and mask is a
bit mask describing which bits of an address your device supports.  Often the
device struct of your device is embedded in the bus-specific device
struct of your device.  For example, &pdev->dev is a pointer to the device
struct of a PCI device (pdev is a pointer to the PCI device struct of your
device).
If it returns non-zero, your device cannot perform DMA properly on
this platform, and attempting to do so will result in undefined behavior.
You must not use DMA on this device unless the dma_set_mask family of
functions has returned success.
This means that in the failure case, you have three options:

1) Use another DMA mask, if possible (see below).
2) Use some non-DMA mode for data transfer, if possible.
3) Ignore this device and do not initialize it.
The standard 32-bit addressing device would do something like this::
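
    /* Sketch: set both the streaming and coherent masks to 32 bits. */
    if (dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32))) {
        dev_warn(dev, "mydev: No suitable DMA available\n");
        goto ignore_this_device;
    }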
Another common scenario is a 64-bit capable device.  The approach here
is to try for 64-bit addressing, but back down to a 32-bit mask that
should not fail.  The kernel may fail the 64-bit mask not because the
platform is incapable of 64-bit addressing; rather, it may fail in
this case simply because 32-bit addressing is done more efficiently
than 64-bit addressing.  For example, Sparc64 PCI SAC addressing is
more efficient than DAC addressing.
Here is how you would handle a 64-bit capable device which can drive
all 64-bits when accessing streaming DMA::
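
    /* Sketch: prefer the 64-bit mask, fall back to 32-bit. */
    int using_dac;

    if (!dma_set_mask(dev, DMA_BIT_MASK(64))) {
        using_dac = 1;
    } else if (!dma_set_mask(dev, DMA_BIT_MASK(32))) {
        using_dac = 0;
    } else {
        dev_warn(dev, "mydev: No suitable DMA available\n");
        goto ignore_this_device;
    }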
If a card is capable of using 64-bit consistent allocations as well,
the case would look like this::
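
    /* Sketch: raise both masks together, fall back together. */
    int using_dac, consistent_using_dac;

    if (!dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64))) {
        using_dac = 1;
        consistent_using_dac = 1;
    } else if (!dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32))) {
        using_dac = 0;
        consistent_using_dac = 0;
    } else {
        dev_warn(dev, "mydev: No suitable DMA available\n");
        goto ignore_this_device;
    }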
Finally, if your device can only drive the low 24-bits of
address you might do something like::

    if (dma_set_mask(dev, DMA_BIT_MASK(24))) {
        dev_warn(dev, "mydev: 24-bit DMA addressing not available\n");
        goto ignore_this_device;
    }
Here is pseudo-code showing how this might be done::

    #define PLAYBACK_ADDRESS_BITS	DMA_BIT_MASK(32)
    #define RECORD_ADDRESS_BITS	DMA_BIT_MASK(24)

    struct my_sound_card *card;
    struct device *dev;

    ...
    if (!dma_set_mask(dev, PLAYBACK_ADDRESS_BITS)) {
        card->playback_enabled = 1;
    } else {
        card->playback_enabled = 0;
        dev_warn(dev, "%s: Playback disabled due to DMA limitations\n",
                 card->name);
    }
    if (!dma_set_mask(dev, RECORD_ADDRESS_BITS)) {
        card->record_enabled = 1;
    } else {
        card->record_enabled = 0;
        dev_warn(dev, "%s: Record disabled due to DMA limitations\n",
                 card->name);
    }
There are two types of DMA mappings:

- Consistent DMA mappings which are usually mapped at driver
  initialization, unmapped at the end and for which the hardware should
  guarantee that the device and the CPU can access the data
  in parallel and will see updates made by each other without any
  explicit software flushing.
  Good examples of what to use consistent mappings for are:

    - Network card DMA ring descriptors.
    - SCSI adapter mailbox command data structures.
    - Device firmware microcode executed out of
      main memory.
  Consistent DMA memory does not preclude the usage of proper memory
  barriers.  The CPU may reorder stores to consistent memory just as it
  may normal memory.  Example: if it is important for the device to see
  the first word of a descriptor updated before the second, you must do
  something like::

        desc->word0 = address;
        wmb();
        desc->word1 = DESC_VALID;

  in order to get correct behavior on all platforms.
- Streaming DMA mappings which are usually mapped for one DMA
  transfer, unmapped right after it (unless you use dma_sync_* below)
  and for which hardware can optimize for sequential accesses.
  Good examples of what to use streaming mappings for are:

    - Networking buffers transmitted/received by a device.
    - Filesystem buffers written/read by a SCSI device.
  The interfaces for using this type of mapping were designed in
  such a way that an implementation can make whatever performance
  optimizations the hardware allows.  To this end, when using
  such mappings you must be explicit about what you want to happen.
Neither type of DMA mapping has alignment restrictions that come from
the underlying bus, although some devices may have such restrictions.
Also, systems with caches that aren't DMA-coherent will work better
when the underlying buffers don't share cache lines with other data.
The consistent DMA mapping interfaces, for non-NULL dev, will by
default return a DMA address which is 32-bit addressable.  Even if the
device indicates (via the DMA mask) that it may address the upper 32-bits,
consistent allocation will only return > 32-bit addresses for DMA if
the consistent DMA mask has been explicitly changed via
dma_set_coherent_mask().  This is true of the dma_pool interface as well.
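
So a driver that wants consistent memory above 4GB must raise the coherent
mask before allocating; a minimal sketch (the error labels and size are
illustrative)::

    dma_addr_t dma_handle;
    void *cpu_addr;

    if (dma_set_coherent_mask(dev, DMA_BIT_MASK(64)))
        goto ignore_this_device;

    cpu_addr = dma_alloc_coherent(dev, size, &dma_handle, GFP_KERNEL);
    if (!cpu_addr)
        goto no_memory;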
Smaller, repeated allocations are handled by the dma_pool interface,
created with dma_pool_create().  The "name" is for diagnostics (like a
kmem_cache name); dev and size are like what you'd pass to
dma_alloc_coherent().  The device's hardware alignment requirement for this
type of data is "align" (which is expressed in bytes, and must be a
power of two).  If your device has no boundary crossing restrictions, pass
0 for boundary; passing 4096 says memory allocated from this pool must not
cross 4KByte boundaries.
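
For instance, creating a pool of 64-byte blocks aligned to 16 bytes might
look like this (a sketch; the name and sizes are illustrative)::

    struct dma_pool *pool;

    pool = dma_pool_create("mydev_descriptors", dev, 64, 16, 0);
    if (!pool)
        goto no_pool;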
The mapping interfaces take a DMA direction argument: DMA_BIDIRECTIONAL,
DMA_TO_DEVICE, DMA_FROM_DEVICE, or DMA_NONE.  You should provide the exact
DMA direction if you know it.  DMA_TO_DEVICE means "from main memory to the
device" and DMA_FROM_DEVICE means "from the device to main memory".
It is the direction in which the data moves during the DMA transfer.
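
For example, a driver mapping a buffer it is about to hand to the device
for transmission would specify DMA_TO_DEVICE (a sketch; buf and len are
illustrative)::

    dma_handle = dma_map_single(dev, buf, len, DMA_TO_DEVICE);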
The value DMA_NONE is to be used for debugging.  One can
hold this in a data structure before you come to know the
precise direction, and this will help catch cases where your
direction tracking logic has failed to set things up properly.
Another advantage of specifying this value precisely (outside of
potential platform-specific optimizations of such) is for debugging.
Some platforms actually have a write permission boolean which DMA
mappings can be marked with, much like page protections in the user
program address space.
The streaming DMA mapping routines can be called from interrupt
context.  There are two versions of each map/unmap, one which will
map/unmap a single memory region, and one which will map/unmap a
scatterlist.
To map a single region, you do::

    struct device *dev = &my_dev->dev;
    dma_addr_t dma_handle;
    void *addr = buffer->ptr;
    size_t size = buffer->len;

    dma_handle = dma_map_single(dev, addr, size, direction);
    if (dma_mapping_error(dev, dma_handle)) {
        /*
         * reduce current DMA mapping usage,
         * delay and try again later or
         * reset driver.
         */
        goto map_error_handling;
    }
You should call dma_mapping_error() as dma_map_single() could fail and return
error.  Doing so will ensure that the mapping code will work correctly on all
DMA implementations without any dependency on the specifics of the underlying
implementation.  Using the returned address without checking for errors could
result in failures ranging from panics to silent data corruption.  The same
applies to dma_map_page() as well.
Using CPU pointers like this for single mappings has a disadvantage:
you cannot reference HIGHMEM memory in this way.  Thus, there is a
map/unmap interface pair akin to dma_{map,unmap}_single().  These
interfaces deal with page/offset pairs instead of CPU pointers.
Specifically::

    struct device *dev = &my_dev->dev;
    dma_addr_t dma_handle;
    struct page *page = buffer->page;
    unsigned long offset = buffer->offset;
    size_t size = buffer->len;

    dma_handle = dma_map_page(dev, page, offset, size, direction);
    if (dma_mapping_error(dev, dma_handle)) {
        /*
         * reduce current DMA mapping usage,
         * delay and try again later or
         * reset driver.
         */
        goto map_error_handling;
    }
The implementation is free to merge several consecutive sglist entries
into one (e.g. if DMA mapping is done with PAGE_SIZE granularity, any
consecutive sglist entries can be merged into one provided the first one
ends and the second one starts on a page boundary - in fact this is a huge
advantage for cards which either cannot do scatter-gather or have a very
limited number of scatter-gather entries) and returns the actual number
of sg entries it mapped them to.  On failure 0 is returned.

Then you should loop count times (note: this can be less than nents times)
and use sg_dma_address() and sg_dma_len() macros where you previously
accessed sg->address and sg->length as shown above.
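
That loop might look like this (a sketch; hw_address[] and hw_len[] stand
in for however your hardware consumes the mapped addresses)::

    struct scatterlist *sg;
    int i, count = dma_map_sg(dev, sglist, nents, direction);

    for_each_sg(sglist, sg, count, i) {
        hw_address[i] = sg_dma_address(sg);
        hw_len[i] = sg_dma_len(sg);
    }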
If you need to use the same streaming DMA region multiple times and touch
the data in between the DMA transfers, the buffer needs to be synced
properly in order for the CPU and device to see the most up-to-date and
correct copy of the DMA buffer.
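
So, firstly, just map it with dma_map_{single,sg}(), and after each DMA
transfer call either::

    dma_sync_single_for_cpu(dev, dma_handle, size, direction);

or::

    dma_sync_sg_for_cpu(dev, sglist, nents, direction);

as appropriate.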
Then, if you wish to let the device get at the DMA area again,
finish accessing the data with the CPU, and then before actually
giving the buffer to the hardware call either::
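
    dma_sync_single_for_device(dev, dma_handle, size, direction);

or::

    dma_sync_sg_for_device(dev, sglist, nents, direction);

as appropriate.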
After the last DMA transfer call one of the DMA unmap routines
dma_unmap_{single,sg}().  If you don't touch the data from the first
dma_map_*() call till dma_unmap_*(), then you don't have to call the
dma_sync_*() routines at all.
Here is pseudo code which shows a situation in which you would need
to use the dma_sync_*() interfaces::

    my_card_setup_receive_buffer(struct my_card *cp, char *buffer, int len)
    {
        dma_addr_t mapping;

        mapping = dma_map_single(cp->dev, buffer, len, DMA_FROM_DEVICE);
        if (dma_mapping_error(cp->dev, mapping)) {
            /*
             * reduce current DMA mapping usage,
             * delay and try again later or
             * reset driver.
             */
            goto map_error_handling;
        }

        cp->rx_buf = buffer;
        cp->rx_len = len;
        cp->rx_dma = mapping;

        give_rx_buf_to_card(cp);
    }
Then, when the interrupt arrives telling you the DMA transfer is done,
you would do something like::

    my_card_interrupt_handler(int irq, void *devid, struct pt_regs *regs)
    {
        struct my_card *cp = devid;

        ...
        if (read_card_status(cp) == RX_BUF_TRANSFERRED) {
            struct my_card_header *hp;

            /* Examine the header to see if we wish
             * to accept the data.  But synchronize
             * the DMA transfer with the CPU first
             * so that we see updated contents.
             */
            dma_sync_single_for_cpu(cp->dev, cp->rx_dma,
                                    cp->rx_len,
                                    DMA_FROM_DEVICE);

            /* Now it is safe to examine the buffer. */
            hp = (struct my_card_header *) cp->rx_buf;
            if (header_is_ok(hp)) {
                dma_unmap_single(cp->dev, cp->rx_dma, cp->rx_len,
                                 DMA_FROM_DEVICE);
                pass_to_upper_layers(cp->rx_buf);
                make_and_setup_new_rx_buf(cp);
            } else {
                /* CPU should not write to
                 * DMA_FROM_DEVICE-mapped area,
                 * so dma_sync_single_for_device() is
                 * not needed here.  It would be required
                 * for DMA_BIDIRECTIONAL mapping if
                 * the memory was modified.
                 */
                give_rx_buf_to_card(cp);
            }
        }
    }
Drivers converted fully to this interface should not use virt_to_bus() any
longer, nor should they use bus_to_virt().  Some drivers have to be changed
a little bit, because there is no longer an equivalent to bus_to_virt() in
the dynamic DMA mapping scheme - you have to always store the DMA addresses
returned by the dma_alloc_coherent(), dma_pool_alloc(), and dma_map_single()
calls (dma_map_sg() stores them in the scatterlist itself if the platform
supports dynamic DMA mapping in hardware) in your driver structures and/or
in the card registers.
DMA address space is limited on some architectures and an allocation
failure can be determined by:

- checking if dma_alloc_coherent() returns NULL or dma_map_sg returns 0

- checking the dma_addr_t returned from dma_map_single() and dma_map_page()
  by using dma_mapping_error()::

    dma_addr_t dma_handle;

    dma_handle = dma_map_single(dev, addr, size, direction);
    if (dma_mapping_error(dev, dma_handle)) {
        /*
         * reduce current DMA mapping usage,
         * delay and try again later or
         * reset driver.
         */
        goto map_error_handling;
    }
- unmap pages that are already mapped, when a mapping error occurs in the
  middle of a multiple page mapping attempt.  These examples are applicable
  to dma_map_page() as well.
Example 1::

    dma_addr_t dma_handle1;
    dma_addr_t dma_handle2;

    dma_handle1 = dma_map_single(dev, addr, size, direction);
    if (dma_mapping_error(dev, dma_handle1)) {
        /*
         * reduce current DMA mapping usage,
         * delay and try again later or
         * reset driver.
         */
        goto map_error_handling1;
    }
    dma_handle2 = dma_map_single(dev, addr, size, direction);
    if (dma_mapping_error(dev, dma_handle2)) {
        /*
         * reduce current DMA mapping usage,
         * delay and try again later or
         * reset driver.
         */
        goto map_error_handling2;
    }

    ...

    map_error_handling2:
        dma_unmap_single(dev, dma_handle1, size, direction);
    map_error_handling1:
Example 2::

    /*
     * if buffers are allocated in a loop, unmap all mapped buffers when
     * mapping error is detected in the middle
     */

    dma_addr_t dma_addr;
    dma_addr_t array[DMA_BUFFERS];
    int save_index = 0;

    for (i = 0; i < DMA_BUFFERS; i++) {

        ...

        dma_addr = dma_map_single(dev, addr, size, direction);
        if (dma_mapping_error(dev, dma_addr)) {
            /*
             * reduce current DMA mapping usage,
             * delay and try again later or
             * reset driver.
             */
            goto unmap_error_handling;
        }
        array[i] = dma_addr;
        save_index++;
    }

    ...

    unmap_error_handling:

    for (i = 0; i < save_index; i++) {

        ...

        dma_unmap_single(dev, array[i], size, direction);
    }
Networking drivers must call dev_kfree_skb() to free the socket buffer
and return NETDEV_TX_OK if the DMA mapping fails on the transmit hook
(ndo_start_xmit).  This means that the socket buffer is just dropped in
the failure case.
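
For instance (a sketch; mydev_start_xmit(), struct mydev_priv, and its
dev field are hypothetical)::

    static netdev_tx_t mydev_start_xmit(struct sk_buff *skb,
                                        struct net_device *netdev)
    {
        struct mydev_priv *priv = netdev_priv(netdev);
        dma_addr_t mapping;

        mapping = dma_map_single(priv->dev, skb->data, skb->len,
                                 DMA_TO_DEVICE);
        if (dma_mapping_error(priv->dev, mapping)) {
            /* Drop the packet; the stack sees a successful send. */
            dev_kfree_skb(skb);
            return NETDEV_TX_OK;
        }

        ...
    }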
SCSI drivers must return SCSI_MLQUEUE_HOST_BUSY if the DMA mapping
fails in the queuecommand hook.  This means that the SCSI subsystem
passes the command to the driver again later.
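
For instance (a sketch; mydev_queuecommand() and the mapping check are
hypothetical)::

    static int mydev_queuecommand(struct Scsi_Host *shost,
                                  struct scsi_cmnd *cmd)
    {
        ...
        if (dma_mapping_error(dev, mapping))
            return SCSI_MLQUEUE_HOST_BUSY;
        ...
    }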
On many platforms, dma_unmap_{single,page}() is simply a nop.
Therefore, keeping track of the mapping address and length is a waste
of space.  Instead of filling your drivers up with ifdefs and the like
to "work around" this (which would defeat the whole purpose of a
portable API) the following facilities are provided.
1) Use DEFINE_DMA_UNMAP_{ADDR,LEN} in state saving structures.  Example,
   before::

    struct ring_state {
        struct sk_buff *skb;
        dma_addr_t mapping;
        __u32 len;
    };

   after::

    struct ring_state {
        struct sk_buff *skb;
        DEFINE_DMA_UNMAP_ADDR(mapping);
        DEFINE_DMA_UNMAP_LEN(len);
    };
2) Use dma_unmap_{addr,len}_set() to set these values.  Example, before::

    ringp->mapping = FOO;
    ringp->len = BAR;

   after::

    dma_unmap_addr_set(ringp, mapping, FOO);
    dma_unmap_len_set(ringp, len, BAR);
3) Use dma_unmap_{addr,len}() to access these values.  Example, before::

    dma_unmap_single(dev, ringp->mapping, ringp->len,
                     DMA_FROM_DEVICE);

   after::

    dma_unmap_single(dev,
                     dma_unmap_addr(ringp, mapping),
                     dma_unmap_len(ringp, len),
                     DMA_FROM_DEVICE);
It really should be self-explanatory.  We treat the ADDR and LEN
separately, because it is possible for an implementation to only
need the address in order to perform the unmap operation.
Architectures must ensure that kmalloc'ed buffers are
DMA-safe.  Drivers and subsystems depend on it.  If an architecture
isn't fully DMA-coherent (i.e. hardware doesn't ensure that data in
the CPU cache is identical to data in main memory),
ARCH_DMA_MINALIGN must be set so that the memory allocator
makes sure that kmalloc'ed buffers don't share a cache line with
others.  See arch/arm/include/asm/cache.h as an example.
Note that ARCH_DMA_MINALIGN is about DMA memory alignment
constraints.  You don't need to worry about the architecture data
alignment constraints (e.g. the alignment constraints about 64-bit
objects).
    David Mosberger-Tang <davidm@hpl.hp.com>