               Dynamic DMA mapping using the generic device
               ============================================

        James E.J. Bottomley <James.Bottomley@HansenPartnership.com>

This document describes the DMA API.  For a gentler introduction
to the API (and actual examples), see Documentation/DMA-API-HOWTO.txt.

This API is split into two pieces.  Part I describes the basic API.
Part II describes extensions for supporting non-consistent memory
machines.  Unless you know that your driver absolutely has to support
non-consistent platforms (this is usually only legacy platforms) you
should only use the API described in Part I.

Part I - dma_ API
-------------------------------------

To get the dma_ API, you must #include <linux/dma-mapping.h>.  This
provides dma_addr_t and the interfaces described below.

A dma_addr_t can hold any valid DMA address for the platform.  It can be
given to a device to use as a DMA source or target.  A CPU cannot reference
a dma_addr_t directly because there may be translation between its physical
address space and the DMA address space.

Part Ia - Using large DMA-coherent buffers
------------------------------------------

void *
dma_alloc_coherent(struct device *dev, size_t size,
			     dma_addr_t *dma_handle, gfp_t flag)

Consistent memory is memory for which a write by either the device or
the processor can immediately be read by the processor or device
without having to worry about caching effects.  (You may however need
to make sure to flush the processor's write buffers before telling
devices to read that memory.)

This routine allocates a region of <size> bytes of consistent memory.

It returns a pointer to the allocated region (in the processor's virtual
address space) or NULL if the allocation failed.

It also returns a <dma_handle> which may be cast to an unsigned integer the
same width as the bus and given to the device as the DMA address base of
the region.

Note: consistent memory can be expensive on some platforms, and the
minimum allocation length may be as big as a page, so you should
consolidate your requests for consistent memory as much as possible.
The simplest way to do that is to use the dma_pool calls (see below).

The flag parameter (dma_alloc_coherent() only) allows the caller to
specify the GFP_ flags (see kmalloc()) for the allocation (the
implementation may choose to ignore flags that affect the location of
the returned memory, like GFP_DMA).

void *
dma_zalloc_coherent(struct device *dev, size_t size,
			     dma_addr_t *dma_handle, gfp_t flag)

Wraps dma_alloc_coherent() and also zeroes the returned memory if the
allocation attempt succeeded.

void
dma_free_coherent(struct device *dev, size_t size, void *cpu_addr,
			   dma_addr_t dma_handle)

Free a region of consistent memory you previously allocated.  dev,
size and dma_handle must all be the same as those passed into
dma_alloc_coherent().  cpu_addr must be the virtual address returned by
dma_alloc_coherent().

Note that unlike their sibling allocation calls, these routines
may only be called with IRQs enabled.

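As an illustration, here is a minimal sketch of how a driver might
allocate and free a descriptor ring with these calls (the device
pointer, ring size and register programming are hypothetical):

	#include <linux/dma-mapping.h>

	#define RING_BYTES 4096			/* hypothetical ring size */

	void *ring_cpu;				/* CPU virtual address */
	dma_addr_t ring_dma;			/* DMA address for the device */

	/* this context may sleep, so GFP_KERNEL is fine here */
	ring_cpu = dma_zalloc_coherent(dev, RING_BYTES, &ring_dma,
				       GFP_KERNEL);
	if (!ring_cpu)
		return -ENOMEM;

	/* ... program ring_dma into the device's ring base register ... */

	/* on teardown (IRQs must be enabled): */
	dma_free_coherent(dev, RING_BYTES, ring_cpu, ring_dma);
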

Part Ib - Using small DMA-coherent buffers
------------------------------------------

To get this part of the dma_ API, you must #include <linux/dmapool.h>.

Many drivers need lots of small DMA-coherent memory regions for DMA
descriptors or I/O buffers.  Rather than allocating in units of a page
or more using dma_alloc_coherent(), you can use DMA pools.  These work
much like a struct kmem_cache, except that they use the DMA-coherent allocator,
not __get_free_pages().  Also, they understand common hardware constraints
for alignment, like queue heads needing to be aligned on N-byte boundaries.


	struct dma_pool *
	dma_pool_create(const char *name, struct device *dev,
			size_t size, size_t align, size_t alloc);

dma_pool_create() initializes a pool of DMA-coherent buffers
for use with a given device.  It must be called in a context which
can sleep.

The "name" is for diagnostics (like a struct kmem_cache name); dev and size
are like what you'd pass to dma_alloc_coherent().  The device's hardware
alignment requirement for this type of data is "align" (which is expressed
in bytes, and must be a power of two).  If your device has no boundary
crossing restrictions, pass 0 for alloc; passing 4096 says memory allocated
from this pool must not cross 4KByte boundaries.


	void *dma_pool_zalloc(struct dma_pool *pool, gfp_t mem_flags,
			      dma_addr_t *handle)

Wraps dma_pool_alloc() and also zeroes the returned memory if the
allocation attempt succeeded.


	void *dma_pool_alloc(struct dma_pool *pool, gfp_t gfp_flags,
			dma_addr_t *dma_handle);

This allocates memory from the pool; the returned memory will meet the
size and alignment requirements specified at creation time.  Pass
GFP_ATOMIC to prevent blocking, or if it's permitted (not
in_interrupt, not holding SMP locks), pass GFP_KERNEL to allow
blocking.  Like dma_alloc_coherent(), this returns two values:  an
address usable by the CPU, and the DMA address usable by the pool's
device.


	void dma_pool_free(struct dma_pool *pool, void *vaddr,
			dma_addr_t addr);

This puts memory back into the pool.  The pool is what was passed to
dma_pool_alloc(); the CPU (vaddr) and DMA addresses are what
were returned when that routine allocated the memory being freed.


	void dma_pool_destroy(struct dma_pool *pool);

dma_pool_destroy() frees the resources of the pool.  It must be
called in a context which can sleep.  Make sure you've freed all allocated
memory back to the pool before you destroy it.

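Putting the pool calls together, a driver typically creates one pool
per descriptor type.  A minimal sketch follows; the pool name, sizes
and alignment are hypothetical:

	#include <linux/dmapool.h>

	struct dma_pool *pool;
	void *desc;
	dma_addr_t desc_dma;

	/* 64-byte descriptors, 16-byte aligned, no boundary restriction */
	pool = dma_pool_create("mydev_desc", dev, 64, 16, 0);
	if (!pool)
		return -ENOMEM;

	desc = dma_pool_zalloc(pool, GFP_KERNEL, &desc_dma);
	if (!desc) {
		dma_pool_destroy(pool);
		return -ENOMEM;
	}

	/* ... give desc_dma to the device, access desc from the CPU ... */

	dma_pool_free(pool, desc, desc_dma);
	dma_pool_destroy(pool);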

Part Ic - DMA addressing limitations
------------------------------------

int
dma_set_mask_and_coherent(struct device *dev, u64 mask)

Checks to see if the mask is possible and updates the device
streaming and coherent DMA mask parameters if it is.

Returns: 0 if successful and a negative error if not.

int
dma_set_mask(struct device *dev, u64 mask)

Checks to see if the mask is possible and updates the device
streaming DMA mask parameter if it is.

Returns: 0 if successful and a negative error if not.

int
dma_set_coherent_mask(struct device *dev, u64 mask)

Checks to see if the mask is possible and updates the device
coherent DMA mask parameter if it is.

Returns: 0 if successful and a negative error if not.

u64
dma_get_required_mask(struct device *dev)

This API returns the mask that the platform requires to
operate efficiently.  Usually this means the returned mask
is the minimum required to cover all of memory.  Examining the
required mask gives drivers with variable descriptor sizes the
opportunity to use smaller descriptors as necessary.

Requesting the required mask does not alter the current mask.  If you
wish to take advantage of it, you should issue a dma_set_mask()
call to set the mask to the value returned.

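For example, a probe routine might try a 64-bit mask and fall back to
32 bits (a minimal sketch; the masks chosen are just an illustration):

	if (dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64)) &&
	    dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32))) {
		dev_warn(dev, "no suitable DMA addressing available\n");
		return -EIO;
	}
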

Part Id - Streaming DMA mappings
--------------------------------

dma_addr_t
dma_map_single(struct device *dev, void *cpu_addr, size_t size,
		      enum dma_data_direction direction)

Maps a piece of processor virtual memory so it can be accessed by the
device and returns the DMA address of the memory.

The dma_ API uses a strongly typed enumerator for its direction:

DMA_NONE		no direction (used for debugging)
DMA_TO_DEVICE		data is going from the memory to the device
DMA_FROM_DEVICE		data is coming from the device to the memory
DMA_BIDIRECTIONAL	direction isn't known

Notes:  Not all memory regions in a machine can be mapped by this API.
Further, contiguous kernel virtual space may not be contiguous as
physical memory.  Since this API does not provide any scatter/gather
capability, it will fail if the user tries to map a non-physically
contiguous piece of memory.  For this reason, memory to be mapped by
this API should be obtained from sources which guarantee it to be
physically contiguous (like kmalloc).

Further, the DMA address of the memory must be within the
dma_mask of the device (the dma_mask is a bit mask of the
addressable region for the device, i.e., if the DMA address of
the memory ANDed with the dma_mask is still equal to the DMA
address, then the device can perform DMA to the memory).  To
ensure that the memory allocated by kmalloc is within the dma_mask,
the driver may specify various platform-dependent flags to restrict
the DMA address range of the allocation (e.g., on x86, GFP_DMA
guarantees to be within the first 16MB of available DMA addresses,
as required by ISA devices).

Note also that the above constraints on physical contiguity and
dma_mask may not apply if the platform has an IOMMU (a device which
maps an I/O DMA address to a physical memory address).  However, to be
portable, device driver writers may *not* assume that such an IOMMU
exists.

Warnings:  Memory coherency operates at a granularity called the cache
line width.  In order for memory mapped by this API to operate
correctly, the mapped region must begin exactly on a cache line
boundary and end exactly on one (to prevent two separately mapped
regions from sharing a single cache line).  Since the cache line size
may not be known at compile time, the API will not enforce this
requirement.  Therefore, it is recommended that driver writers who
don't take special care to determine the cache line size at run time
only map virtual regions that begin and end on page boundaries (which
are guaranteed also to be cache line boundaries).

DMA_TO_DEVICE synchronisation must be done after the last modification
of the memory region by the software and before it is handed off to
the device.  Once this primitive is used, memory covered by this
primitive should be treated as read-only by the device.  If the device
may write to it at any point, it should be DMA_BIDIRECTIONAL (see
below).

DMA_FROM_DEVICE synchronisation must be done before the driver
accesses data that may be changed by the device.  This memory should
be treated as read-only by the driver.  If the driver needs to write
to it at any point, it should be DMA_BIDIRECTIONAL (see below).

DMA_BIDIRECTIONAL requires special handling: it means that the driver
isn't sure if the memory was modified before being handed off to the
device and also isn't sure if the device will also modify it.  Thus,
you must always sync bidirectional memory twice: once before the
memory is handed off to the device (to make sure all memory changes
are flushed from the processor) and once before the data may be
accessed after being used by the device (to make sure any processor
cache lines are updated with data that the device may have changed).

void
dma_unmap_single(struct device *dev, dma_addr_t dma_addr, size_t size,
		 enum dma_data_direction direction)

Unmaps the region previously mapped.  All the parameters must be
identical to those passed in to (and returned by) the mapping API.

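A minimal sketch of the map/unmap lifecycle for a kmalloc()ed buffer
(the buffer and length are hypothetical; see dma_mapping_error() below
for checking the returned address):

	void *buf = kmalloc(len, GFP_KERNEL);
	dma_addr_t dma;

	/* CPU fills buf, then hands it off to the device */
	dma = dma_map_single(dev, buf, len, DMA_TO_DEVICE);
	/* ... check for mapping errors, then tell the device to
	   read len bytes starting at dma ... */

	/* once the device has finished with the buffer: */
	dma_unmap_single(dev, dma, len, DMA_TO_DEVICE);
	kfree(buf);
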
dma_addr_t
dma_map_page(struct device *dev, struct page *page,
		    unsigned long offset, size_t size,
		    enum dma_data_direction direction)
void
dma_unmap_page(struct device *dev, dma_addr_t dma_address, size_t size,
	       enum dma_data_direction direction)

API for mapping and unmapping for pages.  All the notes and warnings
for the other mapping APIs apply here.  Also, although the <offset>
and <size> parameters are provided to do partial page mapping, it is
recommended that you never use these unless you really know what the
cache line width is.

int
dma_mapping_error(struct device *dev, dma_addr_t dma_addr)

In some circumstances dma_map_single() and dma_map_page() will fail to create
a mapping. A driver can check for these errors by testing the returned
DMA address with dma_mapping_error(). A non-zero return value means the mapping
could not be created and the driver should take appropriate action (e.g.
reduce current DMA mapping usage or delay and try again later).

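For example (a minimal sketch; the error-handling label is hypothetical):

	dma_addr_t dma_handle;

	dma_handle = dma_map_single(dev, addr, size, direction);
	if (dma_mapping_error(dev, dma_handle)) {
		/*
		 * reduce current DMA mapping usage,
		 * delay and try again later or
		 * reset driver.
		 */
		goto map_error_handling;
	}
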
	int
	dma_map_sg(struct device *dev, struct scatterlist *sg,
		int nents, enum dma_data_direction direction)

Returns: the number of DMA address segments mapped (this may be shorter
than <nents> passed in if some elements of the scatter/gather list are
physically or virtually adjacent and an IOMMU maps them with a single
entry).

Please note that the sg cannot be mapped again if it has been mapped once.
The mapping process is allowed to destroy information in the sg.

As with the other mapping interfaces, dma_map_sg() can fail. When it
does, 0 is returned and a driver must take appropriate action. It is
critical that the driver do something; in the case of a block driver,
aborting the request or even oopsing is better than doing nothing and
corrupting the filesystem.

With scatterlists, you use the resulting mapping like this:

	int i, count = dma_map_sg(dev, sglist, nents, direction);
	struct scatterlist *sg;

	for_each_sg(sglist, sg, count, i) {
		hw_address[i] = sg_dma_address(sg);
		hw_len[i] = sg_dma_len(sg);
	}

where nents is the number of entries in the sglist.

The implementation is free to merge several consecutive sglist entries
into one (e.g. with an IOMMU, or if several pages just happen to be
physically contiguous) and returns the actual number of sg entries it
mapped them to. On failure, 0 is returned.

Then you should loop count times (note: this can be less than nents times)
and use sg_dma_address() and sg_dma_len() macros where you previously
accessed sg->address and sg->length as shown above.

	void
	dma_unmap_sg(struct device *dev, struct scatterlist *sg,
		int nents, enum dma_data_direction direction)

Unmap the previously mapped scatter/gather list.  All the parameters
must be the same as those passed into the scatter/gather mapping
API.

Note: <nents> must be the number you passed in, *not* the number of
DMA address entries returned.

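A minimal sketch tying these together; note that the device programming
loop uses the returned count, while dma_unmap_sg() takes the original
nents (hw_address[] and hw_len[] are the hypothetical device arrays
from the example above):

	int i, count = dma_map_sg(dev, sglist, nents, DMA_TO_DEVICE);
	struct scatterlist *sg;

	if (count == 0)
		return -ENOMEM;		/* or abort the request */

	for_each_sg(sglist, sg, count, i) {
		hw_address[i] = sg_dma_address(sg);
		hw_len[i] = sg_dma_len(sg);
	}

	/* when the device is done: unmap with nents, not count */
	dma_unmap_sg(dev, sglist, nents, DMA_TO_DEVICE);
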
void
dma_sync_single_for_cpu(struct device *dev, dma_addr_t dma_handle, size_t size,
			enum dma_data_direction direction)
void
dma_sync_single_for_device(struct device *dev, dma_addr_t dma_handle, size_t size,
			   enum dma_data_direction direction)
void
dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sg, int nents,
		    enum dma_data_direction direction)
void
dma_sync_sg_for_device(struct device *dev, struct scatterlist *sg, int nents,
		       enum dma_data_direction direction)

Synchronise a single contiguous or scatter/gather mapping for the CPU
and device. With the sync_sg API, all the parameters must be the same
as those passed into the scatter/gather mapping API. With the sync_single
API, you can use dma_handle and size parameters that aren't identical to
those passed into the single mapping API to do a partial sync.

Notes:  You must do this:

- Before reading values that have been written by DMA from the device
  (use the DMA_FROM_DEVICE direction)
- After writing values that will be written to the device using DMA
  (use the DMA_TO_DEVICE direction)
- Before *and* after handing memory to the device if the memory is
  DMA_BIDIRECTIONAL

See also dma_map_single().

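For example, a driver that lets the device DMA into the same buffer
repeatedly might do (a minimal sketch; buf, dma and len come from an
earlier dma_map_single() call):

	/* after the device has written into the buffer */
	dma_sync_single_for_cpu(dev, dma, len, DMA_FROM_DEVICE);
	/* ... the CPU may now safely read buf ... */

	/* before handing the buffer back to the device */
	dma_sync_single_for_device(dev, dma, len, DMA_FROM_DEVICE);
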
dma_addr_t
dma_map_single_attrs(struct device *dev, void *cpu_addr, size_t size,
		     enum dma_data_direction dir,
		     struct dma_attrs *attrs)

void
dma_unmap_single_attrs(struct device *dev, dma_addr_t dma_addr,
		       size_t size, enum dma_data_direction dir,
		       struct dma_attrs *attrs)

int
dma_map_sg_attrs(struct device *dev, struct scatterlist *sgl,
		 int nents, enum dma_data_direction dir,
		 struct dma_attrs *attrs)

void
dma_unmap_sg_attrs(struct device *dev, struct scatterlist *sgl,
		   int nents, enum dma_data_direction dir,
		   struct dma_attrs *attrs)

The four functions above are just like the counterpart functions
without the _attrs suffixes, except that they pass an optional
struct dma_attrs*.

struct dma_attrs encapsulates a set of "DMA attributes". For the
definition of struct dma_attrs see linux/dma-attrs.h.

The interpretation of DMA attributes is architecture-specific, and
each attribute should be documented in Documentation/DMA-attributes.txt.

If struct dma_attrs* is NULL, the semantics of each of these
functions is identical to those of the corresponding function
without the _attrs suffix. As a result dma_map_single_attrs()
can generally replace dma_map_single(), etc.

As an example of the use of the *_attrs functions, here's how
you could pass an attribute DMA_ATTR_FOO when mapping memory
for DMA:

#include <linux/dma-attrs.h>
/* DMA_ATTR_FOO should be defined in linux/dma-attrs.h and
 * documented in Documentation/DMA-attributes.txt */
...

	DEFINE_DMA_ATTRS(attrs);
	dma_set_attr(DMA_ATTR_FOO, &attrs);
	....
	n = dma_map_sg_attrs(dev, sg, nents, DMA_TO_DEVICE, &attrs);
	....

Architectures that care about DMA_ATTR_FOO would check for its
presence in their implementations of the mapping and unmapping
routines, e.g.:

int whizco_dma_map_sg_attrs(struct device *dev, struct scatterlist *sgl,
			    int nents, enum dma_data_direction dir,
			    struct dma_attrs *attrs)
{
	....
	int foo = dma_get_attr(DMA_ATTR_FOO, attrs);
	....
	if (foo)
		/* twizzle the frobnozzle */
	....
}


Part II - Advanced dma_ usage
-----------------------------

Warning: These pieces of the DMA API should not be used in the
majority of cases, since they cater for unlikely corner cases that
don't belong in usual drivers.

If you don't understand how cache line coherency works between a
processor and an I/O device, you should not be using this part of the
API at all.

void *
dma_alloc_noncoherent(struct device *dev, size_t size,
			       dma_addr_t *dma_handle, gfp_t flag)

Identical to dma_alloc_coherent() except that the platform will
choose to return either consistent or non-consistent memory as it sees
fit.  By using this API, you are guaranteeing to the platform that you
have all the correct and necessary sync points for this memory in the
driver should it choose to return non-consistent memory.

Note: where the platform can return consistent memory, it will
guarantee that the sync points become nops.

Warning:  Handling non-consistent memory is a real pain.  You should
only use this API if you positively know your driver will be
required to work on one of the rare (usually non-PCI) architectures
that simply cannot make consistent memory.

void
dma_free_noncoherent(struct device *dev, size_t size, void *cpu_addr,
			      dma_addr_t dma_handle)

Free memory allocated by the noncoherent API.  All parameters must
be identical to those passed in to (and returned by)
dma_alloc_noncoherent().

int
dma_get_cache_alignment(void)

Returns the processor cache alignment.  This is the absolute minimum
alignment *and* width that you must observe when either mapping
memory or doing partial flushes.

Notes: This API may return a number *larger* than the actual cache
line, but it will guarantee that one or more cache lines fit exactly
into the width returned by this call.  It will also always be a power
of two for easy alignment.

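For example, to size separately mapped buffers so that they never share
a cache line (a minimal sketch using the kernel's ALIGN() helper):

	size_t aligned_size = ALIGN(buf_size, dma_get_cache_alignment());
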
void
dma_cache_sync(struct device *dev, void *vaddr, size_t size,
	       enum dma_data_direction direction)

Do a partial sync of memory that was allocated by
dma_alloc_noncoherent(), starting at virtual address vaddr and
continuing on for size.  Again, you *must* observe the cache line
boundaries when doing this.

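For example, after the CPU has filled a region of a non-consistent
allocation and before telling the device to read it (a minimal sketch;
cpu_addr and size come from the earlier dma_alloc_noncoherent() call
and are assumed to respect cache line boundaries):

	/* CPU writes the descriptor data ... */
	memset(cpu_addr, 0, size);
	/* ... then pushes it out to the device's view of memory */
	dma_cache_sync(dev, cpu_addr, size, DMA_TO_DEVICE);
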
int
dma_declare_coherent_memory(struct device *dev, phys_addr_t phys_addr,
			    dma_addr_t device_addr, size_t size, int
			    flags)

Declare a region of memory to be handed out by dma_alloc_coherent() when
it's asked for coherent memory for this device.

phys_addr is the CPU physical address to which the memory is currently
assigned (this will be ioremapped so the CPU can access the region).

device_addr is the DMA address the device needs to be programmed
with to actually address this memory (this will be handed out as the
dma_addr_t in dma_alloc_coherent()).

size is the size of the area (must be a multiple of PAGE_SIZE).

flags can be ORed together and are:

DMA_MEMORY_MAP - request that the memory returned from
dma_alloc_coherent() be directly writable.

DMA_MEMORY_IO - request that the memory returned from
dma_alloc_coherent() be addressable using read()/write()/memcpy_toio() etc.

One or both of these flags must be present.

DMA_MEMORY_INCLUDES_CHILDREN - make the declared memory be allocated by
dma_alloc_coherent() of any child devices of this one (for memory residing
on a bridge).

DMA_MEMORY_EXCLUSIVE - only allocate memory from the declared regions.
Do not allow dma_alloc_coherent() to fall back to system memory when
it's out of memory in the declared region.

The return value will be either DMA_MEMORY_MAP or DMA_MEMORY_IO and
must correspond to a passed in flag (i.e. no returning DMA_MEMORY_IO
if only DMA_MEMORY_MAP were passed in) for success, or zero for
failure.

Note, for DMA_MEMORY_IO returns, all subsequent memory returned by
dma_alloc_coherent() may no longer be accessed directly, but instead
must be accessed using the correct bus functions.  If your driver
isn't prepared to handle this contingency, it should not specify
DMA_MEMORY_IO in the input flags.

As a simplification for the platforms, only *one* such region of
memory may be declared per device.

For reasons of efficiency, most platforms choose to track the declared
region only at the granularity of a page.  For smaller allocations,
you should use the dma_pool() API.

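For example, a driver for a device with dedicated on-card memory might
declare it like this (a minimal sketch; phys_base, dev_base and
MEM_SIZE are hypothetical):

	if (!dma_declare_coherent_memory(dev, phys_base, dev_base, MEM_SIZE,
					 DMA_MEMORY_MAP | DMA_MEMORY_EXCLUSIVE))
		return -ENXIO;

	/* dma_alloc_coherent() for this device now hands out
	   regions of the declared memory */
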
void
dma_release_declared_memory(struct device *dev)

Remove the memory region previously declared from the system.  This
API performs *no* in-use checking for this region and will return
unconditionally having removed all the required structures.  It is the
driver's job to ensure that no parts of this memory region are
currently in use.

void *
dma_mark_declared_memory_occupied(struct device *dev,
				  dma_addr_t device_addr, size_t size)

This is used to occupy specific regions of the declared space
(dma_alloc_coherent() will hand out the first free region it finds).

device_addr is the *device* address of the region requested.

size is the size (and should be a page-sized multiple).

The return value will be either a pointer to the processor virtual
address of the memory, or an error (via PTR_ERR()) if any part of the
region is occupied.

Part III - Debug drivers use of the DMA-API
-------------------------------------------

The DMA-API as described above has some constraints. DMA addresses must be
released with the corresponding function and with the same size, for example.
With the advent of hardware IOMMUs it becomes more and more important that
drivers do not violate those constraints. In the worst case such a violation
can result in data corruption up to destroyed filesystems.

To debug drivers and find bugs in the usage of the DMA-API, checking code can
be compiled into the kernel which will tell the developer about those
violations. If your architecture supports it you can select the "Enable
debugging of DMA-API usage" option in your kernel configuration. Enabling this
option has a performance impact. Do not enable it in production kernels.

If you boot the resulting kernel, it will contain code which does some
bookkeeping about what DMA memory was allocated for which device. If this
code detects an error it prints a warning message with some details into your
kernel log. An example warning message may look like this:

------------[ cut here ]------------
WARNING: at /data2/repos/linux-2.6-iommu/lib/dma-debug.c:448
	check_unmap+0x203/0x490()
Hardware name:
forcedeth 0000:00:08.0: DMA-API: device driver frees DMA memory with wrong
	function [device address=0x00000000640444be] [size=66 bytes] [mapped as
single] [unmapped as page]
Modules linked in: nfsd exportfs bridge stp llc r8169
Pid: 0, comm: swapper Tainted: G        W  2.6.28-dmatest-09289-g8bb99c0 #1
Call Trace:
 <IRQ>  [<ffffffff80240b22>] warn_slowpath+0xf2/0x130
 [<ffffffff80647b70>] _spin_unlock+0x10/0x30
 [<ffffffff80537e75>] usb_hcd_link_urb_to_ep+0x75/0xc0
 [<ffffffff80647c22>] _spin_unlock_irqrestore+0x12/0x40
 [<ffffffff8055347f>] ohci_urb_enqueue+0x19f/0x7c0
 [<ffffffff80252f96>] queue_work+0x56/0x60
 [<ffffffff80237e10>] enqueue_task_fair+0x20/0x50
 [<ffffffff80539279>] usb_hcd_submit_urb+0x379/0xbc0
 [<ffffffff803b78c3>] cpumask_next_and+0x23/0x40
 [<ffffffff80235177>] find_busiest_group+0x207/0x8a0
 [<ffffffff8064784f>] _spin_lock_irqsave+0x1f/0x50
 [<ffffffff803c7ea3>] check_unmap+0x203/0x490
 [<ffffffff803c8259>] debug_dma_unmap_page+0x49/0x50
 [<ffffffff80485f26>] nv_tx_done_optimized+0xc6/0x2c0
 [<ffffffff80486c13>] nv_nic_irq_optimized+0x73/0x2b0
 [<ffffffff8026df84>] handle_IRQ_event+0x34/0x70
 [<ffffffff8026ffe9>] handle_edge_irq+0xc9/0x150
 [<ffffffff8020e3ab>] do_IRQ+0xcb/0x1c0
 [<ffffffff8020c093>] ret_from_intr+0x0/0xa
 <EOI> <4>---[ end trace f6435a98e2a38c0e ]---

The driver developer can find the driver and the device including a stacktrace
of the DMA-API call which caused this warning.

By default only the first error will result in a warning message. All other
errors will only be silently counted. This limitation exists to prevent the
code from flooding your kernel log. To support debugging a device driver this
can be disabled via debugfs. See the debugfs interface documentation below for
details.

The debugfs directory for the DMA-API debugging code is called dma-api/. In
this directory the following files can currently be found:

	dma-api/all_errors	This file contains a numeric value. If this
				value is not equal to zero the debugging code
				will print a warning for every error it finds
				into the kernel log. Be careful with this
				option, as it can easily flood your logs.

	dma-api/disabled	This read-only file contains the character 'Y'
				if the debugging code is disabled. This can
				happen when it runs out of memory or if it was
				disabled at boot time.

	dma-api/error_count	This file is read-only and shows the total
				number of errors found.

	dma-api/num_errors	The number in this file shows how many
				warnings will be printed to the kernel log
				before it stops. This number is initialized to
				one at system boot and can be set by writing
				into this file.

	dma-api/min_free_entries
				This read-only file can be read to get the
				minimum number of free dma_debug_entries the
				allocator has ever seen. If this value goes
				down to zero the code will disable itself
				because it is no longer reliable.

	dma-api/num_free_entries
				The current number of free dma_debug_entries
				in the allocator.

	dma-api/driver-filter
				You can write a name of a driver into this file
				to limit the debug output to requests from that
				particular driver. Write an empty string to
				that file to disable the filter and see
				all errors again.

If you have this code compiled into your kernel it will be enabled by default.
If you want to boot without the bookkeeping anyway you can provide
'dma_debug=off' as a boot parameter. This will disable DMA-API debugging.
Notice that you cannot enable it again at runtime. You have to reboot to do
so.

If you want to see debug messages only for a specific device driver you can
specify the dma_debug_driver=<drivername> parameter. This will enable the
driver filter at boot time. The debug code will only print errors for that
driver afterwards. This filter can be disabled or changed later using debugfs.

When the code disables itself at runtime this is most likely because it ran
out of dma_debug_entries. These entries are preallocated at boot. The number
of preallocated entries is defined per architecture. If it is too low for you
boot with 'dma_debug_entries=<your_desired_number>' to overwrite the
architecture default.

void debug_dma_mapping_error(struct device *dev, dma_addr_t dma_addr);

debug_dma_mapping_error() is a dma-debug interface for debugging drivers
that fail to check for DMA mapping errors on addresses returned by the
dma_map_single() and dma_map_page() interfaces. This interface clears a
flag set by debug_dma_map_page() to indicate that dma_mapping_error() has
been called by the driver. When the driver does the unmap,
debug_dma_unmap() checks the flag and, if it is still set, prints a
warning message that includes the call trace that leads up to the unmap.
This interface can be called from dma_mapping_error() routines to enable
DMA mapping error check debugging.
