               Dynamic DMA mapping using the generic device
               ============================================

        James E.J. Bottomley <James.Bottomley@HansenPartnership.com>

This document describes the DMA API.  For a more gentle introduction
phrased in terms of the pci_ equivalents (and actual examples) see
Documentation/PCI/PCI-DMA-mapping.txt.

This API is split into two pieces.  Part I describes the API and the
corresponding pci_ API.  Part II describes the extensions to the API
for supporting non-consistent memory machines.  Unless you know that
your driver absolutely has to support non-consistent platforms (this
is usually only legacy platforms) you should only use the API
described in part I.

Part I - pci_ and dma_ Equivalent API
-------------------------------------

To get the pci_ API, you must #include <linux/pci.h>
To get the dma_ API, you must #include <linux/dma-mapping.h>


Part Ia - Using large dma-coherent buffers
------------------------------------------

void *
dma_alloc_coherent(struct device *dev, size_t size,
			     dma_addr_t *dma_handle, gfp_t flag)
void *
pci_alloc_consistent(struct pci_dev *dev, size_t size,
			     dma_addr_t *dma_handle)

Consistent memory is memory for which a write by either the device or
the processor can immediately be read by the processor or device
without having to worry about caching effects.  (You may however need
to make sure to flush the processor's write buffers before telling
devices to read that memory.)

This routine allocates a region of <size> bytes of consistent memory.
It also returns a <dma_handle> which may be cast to an unsigned
integer the same width as the bus and used as the physical address
base of the region.

Returns: a pointer to the allocated region (in the processor's virtual
address space) or NULL if the allocation failed.

Note: consistent memory can be expensive on some platforms, and the
minimum allocation length may be as big as a page, so you should
consolidate your requests for consistent memory as much as possible.
The simplest way to do that is to use the dma_pool calls (see below).

The flag parameter (dma_alloc_coherent only) allows the caller to
specify the GFP_ flags (see kmalloc) for the allocation (the
implementation may choose to ignore flags that affect the location of
the returned memory, like GFP_DMA).  For pci_alloc_consistent, you
must assume GFP_ATOMIC behaviour.

void
dma_free_coherent(struct device *dev, size_t size, void *cpu_addr,
			   dma_addr_t dma_handle)
void
pci_free_consistent(struct pci_dev *dev, size_t size, void *cpu_addr,
			   dma_addr_t dma_handle)

Free the region of consistent memory you previously allocated.  dev,
size and dma_handle must all be the same as those passed into the
consistent allocate.  cpu_addr must be the virtual address returned by
the consistent allocate.

Note that unlike their sibling allocation calls, these routines
may only be called with IRQs enabled.
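
To make the pairing concrete, here is a minimal sketch of a driver
allocating, using and freeing one coherent page; the device pointer
and the error path are illustrative assumptions, not part of the API
description above:

	dma_addr_t dma_handle;
	void *cpu_addr;

	/* allocate one page of consistent memory for this device
	 * (dev is assumed to be a valid struct device *) */
	cpu_addr = dma_alloc_coherent(dev, PAGE_SIZE, &dma_handle,
				      GFP_KERNEL);
	if (!cpu_addr)
		return -ENOMEM;

	/* ... program the device with dma_handle, touch cpu_addr ... */

	/* must not be called with IRQs disabled */
	dma_free_coherent(dev, PAGE_SIZE, cpu_addr, dma_handle);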


Part Ib - Using small dma-coherent buffers
------------------------------------------

To get this part of the dma_ API, you must #include <linux/dmapool.h>

Many drivers need lots of small dma-coherent memory regions for DMA
descriptors or I/O buffers.  Rather than allocating in units of a page
or more using dma_alloc_coherent(), you can use DMA pools.  These work
much like a struct kmem_cache, except that they use the dma-coherent allocator,
not __get_free_pages().  Also, they understand common hardware constraints
for alignment, like queue heads needing to be aligned on N-byte boundaries.


	struct dma_pool *
	dma_pool_create(const char *name, struct device *dev,
			size_t size, size_t align, size_t alloc);

	struct pci_pool *
	pci_pool_create(const char *name, struct pci_dev *dev,
			size_t size, size_t align, size_t alloc);

The pool create() routines initialize a pool of dma-coherent buffers
for use with a given device.  They must be called in a context which
can sleep.

The "name" is for diagnostics (like a struct kmem_cache name); dev and size
are like what you'd pass to dma_alloc_coherent().  The device's hardware
alignment requirement for this type of data is "align" (which is expressed
in bytes, and must be a power of two).  If your device has no boundary
crossing restrictions, pass 0 for alloc; passing 4096 says memory allocated
from this pool must not cross 4KByte boundaries.


	void *dma_pool_alloc(struct dma_pool *pool, gfp_t gfp_flags,
			dma_addr_t *dma_handle);

	void *pci_pool_alloc(struct pci_pool *pool, gfp_t gfp_flags,
			dma_addr_t *dma_handle);

This allocates memory from the pool; the returned memory will meet the size
and alignment requirements specified at creation time.  Pass GFP_ATOMIC to
prevent blocking, or if it's permitted (not in_interrupt, not holding SMP locks),
pass GFP_KERNEL to allow blocking.  Like dma_alloc_coherent(), this returns
two values:  an address usable by the cpu, and the dma address usable by the
pool's device.


	void dma_pool_free(struct dma_pool *pool, void *vaddr,
			dma_addr_t addr);

	void pci_pool_free(struct pci_pool *pool, void *vaddr,
			dma_addr_t addr);

This puts memory back into the pool.  The pool is what was passed to
the pool allocation routine; the cpu (vaddr) and dma addresses are what
were returned when that routine allocated the memory being freed.


	void dma_pool_destroy(struct dma_pool *pool);

	void pci_pool_destroy(struct pci_pool *pool);

The pool destroy() routines free the resources of the pool.  They must be
called in a context which can sleep.  Make sure you've freed all allocated
memory back to the pool before you destroy it.
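
A sketch of a typical pool lifetime follows; the pool name, descriptor
size, alignment and device are illustrative assumptions:

	struct dma_pool *pool;
	dma_addr_t dma_handle;
	void *vaddr;

	/* 32-byte descriptors, 16-byte aligned, no boundary restriction
	 * ("mydev_desc" and the sizes are hypothetical) */
	pool = dma_pool_create("mydev_desc", dev, 32, 16, 0);
	if (!pool)
		return -ENOMEM;

	vaddr = dma_pool_alloc(pool, GFP_KERNEL, &dma_handle);
	if (!vaddr) {
		dma_pool_destroy(pool);
		return -ENOMEM;
	}

	/* ... fill the descriptor, hand dma_handle to the device ... */

	dma_pool_free(pool, vaddr, dma_handle);
	dma_pool_destroy(pool);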


Part Ic - DMA addressing limitations
------------------------------------

int
dma_supported(struct device *dev, u64 mask)
int
pci_dma_supported(struct pci_dev *hwdev, u64 mask)

Checks to see if the device can support DMA to the memory described by
mask.

Returns: 1 if it can and 0 if it can't.

Notes: This routine merely tests to see if the mask is possible.  It
won't change the current mask settings.  It is more intended as an
internal API for use by the platform than an external API for use by
driver writers.

int
dma_set_mask(struct device *dev, u64 mask)
int
pci_set_dma_mask(struct pci_dev *dev, u64 mask)

Checks to see if the mask is possible and updates the device
parameters if it is.

Returns: 0 if successful and a negative error if not.
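
A common pattern, sketched below with example masks, is to try a large
mask first and fall back to a smaller one if the platform refuses it:

	/* Prefer 64-bit DMA; fall back to 32-bit if that's unsupported
	 * (the choice of masks here is illustrative) */
	if (dma_set_mask(dev, DMA_BIT_MASK(64)) &&
	    dma_set_mask(dev, DMA_BIT_MASK(32))) {
		dev_warn(dev, "no suitable DMA mask available\n");
		return -EIO;
	}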

u64
dma_get_required_mask(struct device *dev)

This API returns the mask that the platform requires to
operate efficiently.  Usually this means the returned mask
is the minimum required to cover all of memory.  Examining the
required mask gives drivers with variable descriptor sizes the
opportunity to use smaller descriptors as necessary.

Requesting the required mask does not alter the current mask.  If you
wish to take advantage of it, you should issue a dma_set_mask()
call to set the mask to the value returned.
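
For example, a driver that supports both 32-bit and 64-bit descriptor
formats might do something like the following sketch (error handling
elided; the descriptor-format decision is hypothetical):

	u64 required = dma_get_required_mask(dev);

	if (required > DMA_BIT_MASK(32))
		dma_set_mask(dev, required);		/* use the larger descriptors */
	else
		dma_set_mask(dev, DMA_BIT_MASK(32));	/* 32-bit descriptors suffice */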


Part Id - Streaming DMA mappings
--------------------------------

dma_addr_t
dma_map_single(struct device *dev, void *cpu_addr, size_t size,
		      enum dma_data_direction direction)
dma_addr_t
pci_map_single(struct pci_dev *hwdev, void *cpu_addr, size_t size,
		      int direction)

Maps a piece of processor virtual memory so it can be accessed by the
device and returns the physical handle of the memory.

The direction for both APIs may be converted freely by casting.
However, the dma_ API uses a strongly typed enumerator for its
direction:

DMA_NONE		= PCI_DMA_NONE		no direction (used for
						debugging)
DMA_TO_DEVICE		= PCI_DMA_TODEVICE	data is going from the
						memory to the device
DMA_FROM_DEVICE		= PCI_DMA_FROMDEVICE	data is coming from
						the device to the
						memory
DMA_BIDIRECTIONAL	= PCI_DMA_BIDIRECTIONAL	direction isn't known

Notes:  Not all memory regions in a machine can be mapped by this
API.  Further, regions that appear to be physically contiguous in
kernel virtual space may not be contiguous as physical memory.  Since
this API does not provide any scatter/gather capability, it will fail
if the user tries to map a non-physically contiguous piece of memory.
For this reason, it is recommended that memory mapped by this API be
obtained only from sources which guarantee it to be physically contiguous
(like kmalloc).

Further, the physical address of the memory must be within the
dma_mask of the device (the dma_mask represents a bit mask of the
addressable region for the device.  I.e., if the physical address of
the memory ANDed with the dma_mask is still equal to the physical
address, then the device can perform DMA to the memory).  In order to
ensure that the memory allocated by kmalloc is within the dma_mask,
the driver may specify various platform-dependent flags to restrict
the physical memory range of the allocation (e.g. on x86, GFP_DMA
guarantees to be within the first 16MB of available physical memory,
as required by ISA devices).

Note also that the above constraints on physical contiguity and
dma_mask may not apply if the platform has an IOMMU (a device which
supplies a physical to virtual mapping between the I/O memory bus and
the device).  However, to be portable, device driver writers may *not*
assume that such an IOMMU exists.

Warnings:  Memory coherency operates at a granularity called the cache
line width.  In order for memory mapped by this API to operate
correctly, the mapped region must begin exactly on a cache line
boundary and end exactly on one (to prevent two separately mapped
regions from sharing a single cache line).  Since the cache line size
may not be known at compile time, the API will not enforce this
requirement.  Therefore, it is recommended that driver writers who
don't take special care to determine the cache line size at run time
only map virtual regions that begin and end on page boundaries (which
are guaranteed also to be cache line boundaries).

DMA_TO_DEVICE synchronisation must be done after the last modification
of the memory region by the software and before it is handed off to
the device.  Once this primitive is used, memory covered by this
primitive should be treated as read-only by the device.  If the device
may write to it at any point, it should be DMA_BIDIRECTIONAL (see
below).

DMA_FROM_DEVICE synchronisation must be done before the driver
accesses data that may be changed by the device.  This memory should
be treated as read-only by the driver.  If the driver needs to write
to it at any point, it should be DMA_BIDIRECTIONAL (see below).

DMA_BIDIRECTIONAL requires special handling: it means that the driver
isn't sure if the memory was modified before being handed off to the
device and also isn't sure if the device will modify it.  Thus, you
must always sync bidirectional memory twice: once before the memory is
handed off to the device (to make sure all memory changes are flushed
from the processor) and once before the data may be accessed after
being used by the device (to make sure any processor cache lines are
updated with data that the device may have changed).
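
As a sketch of that double sync, using the dma_sync_single() call
described later in this section (dma_handle and size are whatever the
mapping call gave you; the surrounding driver flow is hypothetical):

	/* region was mapped with direction DMA_BIDIRECTIONAL */
	dma_sync_single(dev, dma_handle, size, DMA_BIDIRECTIONAL);
	/* ... hand the memory off to the device and let it run ... */

	/* ... device signals completion ... */
	dma_sync_single(dev, dma_handle, size, DMA_BIDIRECTIONAL);
	/* now the processor may safely read what the device wrote */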

void
dma_unmap_single(struct device *dev, dma_addr_t dma_addr, size_t size,
		 enum dma_data_direction direction)
void
pci_unmap_single(struct pci_dev *hwdev, dma_addr_t dma_addr,
		 size_t size, int direction)

Unmaps the region previously mapped.  All the parameters must be
identical to those passed in to (and returned by) the mapping API.

dma_addr_t
dma_map_page(struct device *dev, struct page *page,
		    unsigned long offset, size_t size,
		    enum dma_data_direction direction)
dma_addr_t
pci_map_page(struct pci_dev *hwdev, struct page *page,
		    unsigned long offset, size_t size, int direction)
void
dma_unmap_page(struct device *dev, dma_addr_t dma_address, size_t size,
	       enum dma_data_direction direction)
void
pci_unmap_page(struct pci_dev *hwdev, dma_addr_t dma_address,
	       size_t size, int direction)

API for mapping and unmapping for pages.  All the notes and warnings
for the other mapping APIs apply here.  Also, although the <offset>
and <size> parameters are provided to do partial page mapping, it is
recommended that you never use these unless you really know what the
cache width is.

int
dma_mapping_error(struct device *dev, dma_addr_t dma_addr)

int
pci_dma_mapping_error(struct pci_dev *hwdev, dma_addr_t dma_addr)

In some circumstances dma_map_single and dma_map_page will fail to create
a mapping.  A driver can check for these errors by testing the returned
dma address with dma_mapping_error().  A non-zero return value means the
mapping could not be created and the driver should take appropriate action
(e.g. reduce current DMA mapping usage or delay and try again later).
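
Putting these pieces together, here is a sketch of one streaming
mapping with error checking; buffer and size stand for hypothetical
driver state:

	dma_addr_t dma_handle;

	/* buffer/size: hypothetical, e.g. from kmalloc */
	dma_handle = dma_map_single(dev, buffer, size, DMA_TO_DEVICE);
	if (dma_mapping_error(dev, dma_handle)) {
		/* reduce mapping usage, or delay and retry later */
		return -ENOMEM;
	}

	/* ... hand dma_handle to the device and start the transfer ... */

	dma_unmap_single(dev, dma_handle, size, DMA_TO_DEVICE);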

	int
	dma_map_sg(struct device *dev, struct scatterlist *sg,
		int nents, enum dma_data_direction direction)
	int
	pci_map_sg(struct pci_dev *hwdev, struct scatterlist *sg,
		int nents, int direction)

Returns: the number of physical segments mapped (this may be shorter
than <nents> passed in if some elements of the scatter/gather list are
physically or virtually adjacent and an IOMMU maps them with a single
entry).

Please note that the sg cannot be mapped again if it has been mapped once.
The mapping process is allowed to destroy information in the sg.

As with the other mapping interfaces, dma_map_sg can fail.  When it
does, 0 is returned and the driver must take appropriate action.  It is
critical that the driver do something; in the case of a block driver,
aborting the request or even oopsing is better than doing nothing and
corrupting the filesystem.

With scatterlists, you use the resulting mapping like this:

	int i, count = dma_map_sg(dev, sglist, nents, direction);
	struct scatterlist *sg;

	for_each_sg(sglist, sg, count, i) {
		hw_address[i] = sg_dma_address(sg);
		hw_len[i] = sg_dma_len(sg);
	}

where nents is the number of entries in the sglist.

The implementation is free to merge several consecutive sglist entries
into one (e.g. with an IOMMU, or if several pages just happen to be
physically contiguous) and return the actual number of sg entries it
mapped them to.  On failure, 0 is returned.

Then you should loop count times (note: this can be less than nents times)
and use sg_dma_address() and sg_dma_len() macros where you previously
accessed sg->address and sg->length as shown above.

	void
	dma_unmap_sg(struct device *dev, struct scatterlist *sg,
		int nhwentries, enum dma_data_direction direction)
	void
	pci_unmap_sg(struct pci_dev *hwdev, struct scatterlist *sg,
		int nents, int direction)

Unmap the previously mapped scatter/gather list.  All the parameters
must be the same as those passed in to the scatter/gather mapping
API.

Note: <nents> must be the number you passed in, *not* the number of
physical entries returned.

void
dma_sync_single(struct device *dev, dma_addr_t dma_handle, size_t size,
		enum dma_data_direction direction)
void
pci_dma_sync_single(struct pci_dev *hwdev, dma_addr_t dma_handle,
			   size_t size, int direction)
void
dma_sync_sg(struct device *dev, struct scatterlist *sg, int nelems,
			  enum dma_data_direction direction)
void
pci_dma_sync_sg(struct pci_dev *hwdev, struct scatterlist *sg,
		       int nelems, int direction)

Synchronise a single contiguous or scatter/gather mapping.  All the
parameters must be the same as those passed into the single mapping
API.

Notes:  You must do this:

- Before reading values that have been written by DMA from the device
  (use the DMA_FROM_DEVICE direction)
- After writing values that will be written to the device using DMA
  (use the DMA_TO_DEVICE direction)
- Before *and* after handing memory to the device if the memory is
  DMA_BIDIRECTIONAL

See also dma_map_single().
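
For instance, a receive path might sync before reading a buffer the
device has just filled (a sketch; process_rx_data() is a hypothetical
consumer, not part of the API):

	/* the device has DMA'd into the mapped region */
	dma_sync_single(dev, dma_handle, size, DMA_FROM_DEVICE);

	process_rx_data(cpu_addr, size);	/* hypothetical consumer */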

dma_addr_t
dma_map_single_attrs(struct device *dev, void *cpu_addr, size_t size,
		     enum dma_data_direction dir,
		     struct dma_attrs *attrs)

void
dma_unmap_single_attrs(struct device *dev, dma_addr_t dma_addr,
		       size_t size, enum dma_data_direction dir,
		       struct dma_attrs *attrs)

int
dma_map_sg_attrs(struct device *dev, struct scatterlist *sgl,
		 int nents, enum dma_data_direction dir,
		 struct dma_attrs *attrs)

void
dma_unmap_sg_attrs(struct device *dev, struct scatterlist *sgl,
		   int nents, enum dma_data_direction dir,
		   struct dma_attrs *attrs)

The four functions above are just like the counterpart functions
without the _attrs suffixes, except that they pass an optional
struct dma_attrs*.

struct dma_attrs encapsulates a set of "dma attributes". For the
definition of struct dma_attrs see linux/dma-attrs.h.

The interpretation of dma attributes is architecture-specific, and
each attribute should be documented in Documentation/DMA-attributes.txt.

If struct dma_attrs* is NULL, the semantics of each of these
functions is identical to those of the corresponding function
without the _attrs suffix. As a result dma_map_single_attrs()
can generally replace dma_map_single(), etc.

As an example of the use of the *_attrs functions, here's how
you could pass an attribute DMA_ATTR_FOO when mapping memory
for DMA:

#include <linux/dma-attrs.h>
/* DMA_ATTR_FOO should be defined in linux/dma-attrs.h and
 * documented in Documentation/DMA-attributes.txt */
...

	DEFINE_DMA_ATTRS(attrs);
	dma_set_attr(DMA_ATTR_FOO, &attrs);
	....
	n = dma_map_sg_attrs(dev, sg, nents, DMA_TO_DEVICE, &attrs);
	....

Architectures that care about DMA_ATTR_FOO would check for its
presence in their implementations of the mapping and unmapping
routines, e.g.:

void whizco_dma_map_sg_attrs(struct device *dev, dma_addr_t dma_addr,
			     size_t size, enum dma_data_direction dir,
			     struct dma_attrs *attrs)
{
	....
	int foo = dma_get_attr(DMA_ATTR_FOO, attrs);
	....
	if (foo)
		/* twizzle the frobnozzle */
	....
}


Part II - Advanced dma_ usage
-----------------------------

Warning: These pieces of the DMA API have no PCI equivalent.  They
should also not be used in the majority of cases, since they cater for
unlikely corner cases that don't belong in usual drivers.

If you don't understand how cache line coherency works between a
processor and an I/O device, you should not be using this part of the
API at all.

void *
dma_alloc_noncoherent(struct device *dev, size_t size,
			       dma_addr_t *dma_handle, gfp_t flag)

Identical to dma_alloc_coherent() except that the platform will
choose to return either consistent or non-consistent memory as it sees
fit.  By using this API, you are guaranteeing to the platform that you
have all the correct and necessary sync points for this memory in the
driver should it choose to return non-consistent memory.

Note: where the platform can return consistent memory, it will
guarantee that the sync points become nops.

Warning:  Handling non-consistent memory is a real pain.  You should
only ever use this API if you positively know your driver will be
required to work on one of the rare (usually non-PCI) architectures
that simply cannot make consistent memory.

void
dma_free_noncoherent(struct device *dev, size_t size, void *cpu_addr,
			      dma_addr_t dma_handle)

Free memory allocated by the nonconsistent API.  All parameters must
be identical to those passed in to (and returned by)
dma_alloc_noncoherent().

int
dma_is_consistent(struct device *dev, dma_addr_t dma_handle)

Returns true if the device dev is performing consistent DMA on the memory
area pointed to by the dma_handle.

int
dma_get_cache_alignment(void)

Returns the processor cache alignment.  This is the absolute minimum
alignment *and* width that you must observe when either mapping
memory or doing partial flushes.

Notes: This API may return a number *larger* than the actual cache
line, but it will guarantee that one or more cache lines fit exactly
into the width returned by this call.  It will also always be a power
of two for easy alignment.
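
A short sketch of using the returned value to round a partial-flush
length up to a safe width (len is a hypothetical driver variable):

	int align = dma_get_cache_alignment();
	size_t safe_len = ALIGN(len, align);	/* round up to the alignment */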

void
dma_sync_single_range(struct device *dev, dma_addr_t dma_handle,
		      unsigned long offset, size_t size,
		      enum dma_data_direction direction)

Does a partial sync, starting at offset and continuing for size.  You
must be careful to observe the cache alignment and width when doing
anything like this.  You must also be extra careful about accessing
memory you intend to sync partially.

void
dma_cache_sync(struct device *dev, void *vaddr, size_t size,
	       enum dma_data_direction direction)

Do a partial sync of memory that was allocated by
dma_alloc_noncoherent(), starting at virtual address vaddr and
continuing on for size.  Again, you *must* observe the cache line
boundaries when doing this.
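
A sketch of the noncoherent lifecycle, including the sync point needed
before the processor reads device-written data (dev and size are
illustrative):

	dma_addr_t dma_handle;
	void *vaddr;

	vaddr = dma_alloc_noncoherent(dev, size, &dma_handle, GFP_KERNEL);
	if (!vaddr)
		return -ENOMEM;

	/* ... device DMAs into the buffer ... */

	/* sync before the processor reads what the device wrote */
	dma_cache_sync(dev, vaddr, size, DMA_FROM_DEVICE);

	/* ... read the data ... */

	dma_free_noncoherent(dev, size, vaddr, dma_handle);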

int
dma_declare_coherent_memory(struct device *dev, dma_addr_t bus_addr,
			    dma_addr_t device_addr, size_t size,
			    int flags)

Declare region of memory to be handed out by dma_alloc_coherent when
it's asked for coherent memory for this device.

bus_addr is the physical address to which the memory is currently
assigned in the bus responding region (this will be used by the
platform to perform the mapping).

device_addr is the physical address with which the device actually
needs to be programmed to address this memory (this will be handed out
as the dma_addr_t in dma_alloc_coherent()).

size is the size of the area (must be a multiple of PAGE_SIZE).

flags can be or'd together and are:

DMA_MEMORY_MAP - request that the memory returned from
dma_alloc_coherent() be directly writable.

DMA_MEMORY_IO - request that the memory returned from
dma_alloc_coherent() be addressable using read/write/memcpy_toio etc.

One or both of these flags must be present.

DMA_MEMORY_INCLUDES_CHILDREN - make the declared memory be allocated by
dma_alloc_coherent of any child devices of this one (for memory residing
on a bridge).

DMA_MEMORY_EXCLUSIVE - only allocate memory from the declared regions.
Do not allow dma_alloc_coherent() to fall back to system memory when
it's out of memory in the declared region.

The return value will be either DMA_MEMORY_MAP or DMA_MEMORY_IO and
must correspond to a passed in flag (i.e. no returning DMA_MEMORY_IO
if only DMA_MEMORY_MAP were passed in) for success or zero for
failure.

Note, for DMA_MEMORY_IO returns, all subsequent memory returned by
dma_alloc_coherent() may no longer be accessed directly, but instead
must be accessed using the correct bus functions.  If your driver
isn't prepared to handle this contingency, it should not specify
DMA_MEMORY_IO in the input flags.

As a simplification for the platforms, only *one* such region of
memory may be declared per device.

For reasons of efficiency, most platforms choose to track the declared
region only at the granularity of a page.  For smaller allocations,
you should use the dma_pool() API.
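
As an illustration (the bus address, device address and size below are
hypothetical), a driver with a 1MB window of on-card memory might
declare it like this:

	/* hypothetical: 1MB of device-local memory at bus address
	 * 0x80000000, which the device itself addresses starting at 0 */
	if (dma_declare_coherent_memory(dev, 0x80000000, 0x0, 1024 * 1024,
					DMA_MEMORY_MAP | DMA_MEMORY_EXCLUSIVE)
	    != DMA_MEMORY_MAP)
		return -ENODEV;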

void
dma_release_declared_memory(struct device *dev)

Remove the memory region previously declared from the system.  This
API performs *no* in-use checking for this region and will return
unconditionally having removed all the required structures.  It is the
driver's job to ensure that no parts of this memory region are
currently in use.

void *
dma_mark_declared_memory_occupied(struct device *dev,
				  dma_addr_t device_addr, size_t size)

This is used to occupy specific regions of the declared space
(dma_alloc_coherent() will hand out the first free region it finds).

device_addr is the *device* address of the region requested.

size is the size (and should be a page-sized multiple).

The return value will be either a pointer to the processor virtual
address of the memory, or an error (via PTR_ERR()) if any part of the
region is occupied.
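
A sketch of reserving a slot and checking the result (the device
address and size are hypothetical):

	void *vaddr;

	/* hypothetical: reserve one page at device address 0x1000 */
	vaddr = dma_mark_declared_memory_occupied(dev, 0x1000, PAGE_SIZE);
	if (IS_ERR(vaddr))
		return PTR_ERR(vaddr);	/* part of the region was in use */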