DMA-eligible devices to the controller itself. Whenever the device
wants to start a transfer, it asserts a DMA request (DRQ) by

parameter: the transfer size. At each clock cycle, it would transfer a
byte of data from one buffer to another until the transfer size has

cycle. For example, we may want to transfer as much data as the

that requires data to be written exactly 16 or 24 bits at a time. This

parameter called the transfer width.

transfer into smaller sub-transfers.
that involve a single contiguous block of data. However, some of the
transfers we usually have are not, and instead need to copy data from
non-contiguous buffers to a contiguous buffer, which is called
scatter-gather.
scatter-gather. So we're left with two cases here: either we have a

that implements scatter-gather in hardware.

transfer, and whenever the transfer is started, the controller will go

to know where to fetch the data from.
transfer width and the transfer size.
These were just the general memory-to-memory (also called mem2mem) or
memory-to-device (mem2dev) kind of transfers. Most devices often

documentation file in Documentation/crypto/async-tx-api.rst.
------------------------------------
- ``channels``: should be initialized as a list using the

- ``src_addr_widths``:
  should contain a bitmask of the supported source transfer widths

- ``dst_addr_widths``:
  should contain a bitmask of the supported destination transfer widths

- ``directions``:

- ``residue_granularity``:
  granularity of the transfer residue reported to dma_set_residue.

  - Descriptor:

  - Segment:

  - Burst:

- ``dev``: should hold the pointer to the ``struct device`` associated
---------------------------
- DMA_MEMCPY

  - The device is able to do memory to memory copies

- DMA_XOR

  - The device is able to perform XOR operations on memory areas

  - Used to accelerate XOR-intensive tasks, such as RAID5

- DMA_XOR_VAL

  - The device is able to perform parity check using the XOR

- DMA_PQ

  - The device is able to perform RAID6 P+Q computations, P being a
    simple XOR, and Q being a Reed-Solomon algorithm.
- DMA_PQ_VAL

  - The device is able to perform parity check using RAID6 P+Q

- DMA_INTERRUPT

  - The device is able to trigger a dummy transfer that will

  - Used by the client drivers to register a callback that will be

- DMA_PRIVATE

  - The device only supports slave transfers, and as such isn't

- DMA_ASYNC_TX

  - Must not be set by the device, and will be set by the framework

  - TODO: What is it about?

- DMA_SLAVE

  - The device can handle device to memory transfers, including
    scatter-gather transfers.
  - While in the mem2mem case we had two distinct types to

  - If you want to transfer a single contiguous memory buffer,

- DMA_CYCLIC

  - The device can handle cyclic transfers.

  - A cyclic transfer is a transfer where the chunk collection will

  - It's usually used for audio transfers, where you want to operate
    on a single ring buffer that you will fill with your audio data.
- DMA_INTERLEAVE

  - The device supports interleaved transfers.

  - These transfers can transfer data from a non-contiguous buffer
    to a non-contiguous buffer, as opposed to DMA_SLAVE, which can
    transfer data from a non-contiguous data set to a contiguous

  - It's usually used for 2d content transfers, in which case you
    want to transfer a portion of uncompressed data directly to the
- DMA_COMPLETION_NO_ORDER

  - The device does not support in-order completion.

  - The driver should return DMA_OUT_OF_ORDER for device_tx_status if

  - All cookie tracking and checking APIs should be treated as invalid if

  - At this point, this is incompatible with the polling option for dmatest.

  - If this cap is set, the user is recommended to provide a unique

- DMA_REPEAT

  - The device supports repeated transfers. A repeated transfer, indicated by
    the DMA_PREP_REPEAT transfer flag, is similar to a cyclic transfer in that

  - This feature is limited to interleaved transfers; this flag should thus not

    the current needs of DMA clients, support for additional transfer types

- DMA_LOAD_EOT

  - The device supports replacing repeated transfers at end of transfer (EOT)
    by queuing a new transfer with the DMA_PREP_LOAD_EOT flag set.

  - Support for replacing a currently running transfer at another point (such
    as end of burst instead of end of transfer) will be added in the future

after each transfer. In the case of a ring buffer, they may loop
-------------------------------
Some data movement architectures (DMA controller and peripherals) use metadata
associated with a transaction. The DMA controller's role is to transfer the

- DESC_METADATA_CLIENT

  - DMA_MEM_TO_DEV / DEV_MEM_TO_MEM

    The data from the provided metadata buffer should be prepared for the DMA
    controller to be sent alongside the payload data, either by copying it to a

  - DMA_DEV_TO_MEM

    On transfer completion the DMA driver must copy the metadata to the client

    After the transfer completion, DMA drivers must not touch the metadata

- DESC_METADATA_ENGINE

  - get_metadata_ptr()

  - set_metadata_len()
-----------------
- ``device_alloc_chan_resources``

- ``device_free_chan_resources``

  - These functions are called whenever a driver calls

  - They are in charge of allocating/freeing all the needed

  - These functions can sleep.

- ``device_prep_dma_*``

  - These functions match the capabilities you registered

  - These functions all take the buffer or the scatterlist relevant
    to the transfer being prepared, and should create a hardware

  - These functions can be called from an interrupt context

  - Any allocation you might do should use the GFP_NOWAIT

  - Drivers should try to pre-allocate any memory they might need
    during the transfer setup at probe time, to avoid putting too

  - It should return a unique instance of the

    particular transfer.

  - This structure can be initialized using the function

  - You'll also need to set two fields in this structure:

    - flags:

    - tx_submit: A pointer to a function you have to implement,

  - In this structure the function pointer callback_result can be

    - result: This provides the transfer result defined by

    - residue: Provides the residue bytes of the transfer for those that
- ``device_issue_pending``

  - Takes the first transaction descriptor in the pending queue,
    and starts the transfer. Whenever that transfer is done, it

  - This function can be called in an interrupt context

- ``device_tx_status``

  - Should report the bytes left to go over on the given channel

  - Should only care about the transaction descriptor passed as

  - The tx_state argument might be NULL

  - Should use dma_set_residue to report it

  - In the case of a cyclic transfer, it should only take into

  - Should return DMA_OUT_OF_ORDER if the device does not support in-order

  - This function can be called in an interrupt context.
- ``device_config``

  - Reconfigures the channel with the configuration given as an argument

  - This command should NOT perform synchronously, or on any

  - In this case, the function will receive a ``dma_slave_config``

  - Even though that structure contains a direction field, this

  - This call is mandatory for slave operations only. This should NOT be

- ``device_pause``

  - Pauses a transfer on the channel

  - This command should operate synchronously on the channel,

- ``device_resume``

  - Resumes a transfer on the channel

  - This command should operate synchronously on the channel,

- ``device_terminate_all``

  - Aborts all the pending and ongoing transfers on the channel

  - For aborted transfers the complete callback should not be called

  - Can be called from atomic context or from within a complete

  - Termination may be asynchronous. The driver does not have to
    wait until the currently active transfer has completely stopped.

- ``device_synchronize``

  - Must synchronize the termination of a channel to the current

  - Must make sure that memory for previously submitted

  - Must make sure that all complete callbacks for previously

  - May sleep.
- Should be called at the end of an async TX transfer, and can be

- Makes sure that dependent operations are run before marking it

- It's a DMA transaction ID that will increment over time.

- Not really relevant any more since the introduction of ``virt-dma``

- If clear, the descriptor cannot be reused by the provider until the

- This can be acked by invoking async_tx_ack()

- If set, it does not mean the descriptor can be reused

- If set, the descriptor can be reused after being completed. It should

- The descriptor should be prepared for reuse by invoking

- ``dmaengine_desc_set_reuse()`` will succeed only when the channel supports

- As a consequence, if a device driver wants to skip the

  because the DMA'd data wasn't used, it can resubmit the transfer right after

- Descriptors can be freed in a few ways

  - Clearing DMA_CTRL_REUSE by invoking

  - Explicitly invoking ``dmaengine_desc_free()``; this can succeed only

  - Terminating the channel
- DMA_PREP_CMD

  - If set, the client driver tells the DMA controller that the data passed
    in the DMA API is command data.

  - Interpretation of command data is DMA controller specific. It can be

    normal data descriptors.

- DMA_PREP_REPEAT

  - If set, the transfer will be automatically repeated when it ends until a
    new transfer is queued on the same channel with the DMA_PREP_LOAD_EOT flag.
    If the next transfer to be queued on the channel does not have the
    DMA_PREP_LOAD_EOT flag set, the current transfer will be repeated until the

  - This flag is only supported if the channel reports the DMA_REPEAT

- DMA_PREP_LOAD_EOT

  - If set, the transfer will replace the transfer currently being executed at
    the end of the transfer.

  - This is the default behaviour for non-repeated transfers; specifying
    DMA_PREP_LOAD_EOT for non-repeated transfers will thus make no difference.

  - When using repeated transfers, DMA clients will usually need to set the

    repeating the last repeated transfer and ignore the new transfers being

    stuck on the previous transfer.

  - This flag is only supported if the channel reports the DMA_LOAD_EOT
that handles the end of transfer interrupts in the handler, but defer
most work to a tasklet, including the start of a new transfer whenever
the previous transfer ended.

This is a rather inefficient design though, because the inter-transfer

in between, which will slow down the global transfer rate.

transfer in your tasklet, move that part to the interrupt handler in
- Burst: A number of consecutive read or write operations that

- Chunk: A contiguous collection of bursts

- Transfer: A collection of chunks (be it contiguous or not)