forked from ~ljy/RK356X_SDK_RELEASE

hc
2024-10-09 244b2c5ca8b14627e4a17755e5922221e121c771
kernel/Documentation/driver-api/dmaengine/provider.rst
@@ -95,7 +95,7 @@
 ensure that it stayed compatible.
 
 For more information on the Async TX API, please look the relevant
-documentation file in Documentation/crypto/async-tx-api.txt.
+documentation file in Documentation/crypto/async-tx-api.rst.
 
 DMAEngine APIs
 ==============
@@ -239,6 +239,43 @@
     want to transfer a portion of uncompressed data directly to the
     display to print it
 
+- DMA_COMPLETION_NO_ORDER
+
+  - The device does not support in order completion.
+
+  - The driver should return DMA_OUT_OF_ORDER for device_tx_status if
+    the device sets this capability.
+
+  - All cookie tracking and checking APIs should be treated as invalid if
+    the device exports this capability.
+
+  - At this point, this is incompatible with the polling option for dmatest.
+
+  - If this cap is set, the user is recommended to provide a unique
+    identifier for each descriptor sent to the DMA device in order to
+    properly track the completion.
+
+- DMA_REPEAT
+
+  - The device supports repeated transfers. A repeated transfer, indicated by
+    the DMA_PREP_REPEAT transfer flag, is similar to a cyclic transfer in that
+    it gets automatically repeated when it ends, but can additionally be
+    replaced by the client.
+
+  - This feature is limited to interleaved transfers; this flag should thus not
+    be set if the DMA_INTERLEAVE flag isn't set. This limitation is based on
+    the current needs of DMA clients; support for additional transfer types
+    should be added in the future if and when the need arises.
+
+- DMA_LOAD_EOT
+
+  - The device supports replacing repeated transfers at end of transfer (EOT)
+    by queuing a new transfer with the DMA_PREP_LOAD_EOT flag set.
+
+  - Support for replacing a currently running transfer at another point (such
+    as end of burst instead of end of transfer) will be added in the future
+    based on DMA clients' needs, if and when the need arises.
+
 These various types will also affect how the source and destination
 addresses change over time.
 
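Before relying on DMA_PREP_REPEAT, a client should check that the provider actually advertises these capabilities. A minimal client-side sketch, not part of this patch, using the existing dma_has_cap() macro on the channel's capability mask (chan_supports_repeated_xfer() is a hypothetical helper)::

    #include <linux/dmaengine.h>

    /*
     * Hypothetical helper: before preparing a transfer with DMA_PREP_REPEAT,
     * verify that the provider advertises the DMA_REPEAT and DMA_LOAD_EOT
     * capabilities (and DMA_INTERLEAVE, since repeated transfers are limited
     * to interleaved transfers).
     */
    static bool chan_supports_repeated_xfer(struct dma_chan *chan)
    {
            struct dma_device *dmadev = chan->device;

            if (!dma_has_cap(DMA_INTERLEAVE, dmadev->cap_mask))
                    return false;

            return dma_has_cap(DMA_REPEAT, dmadev->cap_mask) &&
                   dma_has_cap(DMA_LOAD_EOT, dmadev->cap_mask);
    }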
@@ -246,6 +283,62 @@
 after each transfer. In case of a ring buffer, they may loop
 (DMA_CYCLIC). Addresses pointing to a device's register (e.g. a FIFO)
 are typically fixed.
+
+Per descriptor metadata support
+-------------------------------
+Some data movement architectures (DMA controller and peripherals) use metadata
+associated with a transaction. The DMA controller's role is to transfer the
+payload and the metadata alongside.
+The metadata itself is not used by the DMA engine, but it contains
+parameters, keys, vectors, etc. for the peripheral or from the peripheral.
+
+The DMAengine framework provides a generic way to facilitate metadata for
+descriptors. Depending on the architecture, the DMA driver can implement either
+or both of the methods, and it is up to the client driver to choose which one
+to use.
+
+- DESC_METADATA_CLIENT
+
+  The metadata buffer is allocated/provided by the client driver and it is
+  attached (via the dmaengine_desc_attach_metadata() helper) to the descriptor.
+
+  From the DMA driver the following is expected for this mode:
+
+  - DMA_MEM_TO_DEV / DEV_MEM_TO_MEM
+
+    The data from the provided metadata buffer should be prepared for the DMA
+    controller to be sent alongside of the payload data, either by copying it
+    to a hardware descriptor or as a tightly coupled packet.
+
+  - DMA_DEV_TO_MEM
+
+    On transfer completion the DMA driver must copy the metadata to the client
+    provided metadata buffer before notifying the client about the completion.
+    After the transfer completion, DMA drivers must not touch the metadata
+    buffer provided by the client.
+
+- DESC_METADATA_ENGINE
+
+  The metadata buffer is allocated/managed by the DMA driver. The client driver
+  can ask for the pointer, maximum size and the currently used size of the
+  metadata and can directly update or read it. dmaengine_desc_get_metadata_ptr()
+  and dmaengine_desc_set_metadata_len() are provided as helper functions.
+
+  From the DMA driver the following is expected for this mode:
+
+  - get_metadata_ptr()
+
+    Should return a pointer to the metadata buffer, the maximum size of the
+    metadata buffer and the currently used / valid (if any) bytes in the buffer.
+
+  - set_metadata_len()
+
+    It is called by the client after it has placed the metadata in the buffer
+    to let the DMA driver know the number of valid bytes provided.
+
+  Note: since the client will ask for the metadata pointer in the completion
+  callback (in the DMA_DEV_TO_MEM case), the DMA driver must ensure that the
+  descriptor is not freed up before the callback is called.
 
 Device operations
 -----------------
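A rough client-side sketch of the two metadata modes described above, using the helpers named in the text; the wrapper function names and buffer sizes are illustrative only::

    #include <linux/dmaengine.h>
    #include <linux/err.h>
    #include <linux/string.h>

    /* DESC_METADATA_CLIENT: the client owns the buffer and attaches it
     * to the descriptor before submission. */
    static int attach_client_metadata(struct dma_async_tx_descriptor *desc,
                                      void *md, size_t md_len)
    {
            return dmaengine_desc_attach_metadata(desc, md, md_len);
    }

    /* DESC_METADATA_ENGINE: the provider owns the buffer; the client asks
     * for the pointer, fills it, then reports how many bytes are valid. */
    static int fill_engine_metadata(struct dma_async_tx_descriptor *desc,
                                    const void *md, size_t md_len)
    {
            size_t payload_len, max_len;
            void *ptr;

            ptr = dmaengine_desc_get_metadata_ptr(desc, &payload_len, &max_len);
            if (IS_ERR(ptr))
                    return PTR_ERR(ptr);
            if (md_len > max_len)
                    return -EINVAL;

            memcpy(ptr, md, md_len);
            return dmaengine_desc_set_metadata_len(desc, md_len);
    }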
@@ -343,6 +436,9 @@
   - In the case of a cyclic transfer, it should only take into
     account the current period.
 
+  - Should return DMA_OUT_OF_ORDER if the device does not support in order
+    completion and is completing the operation out of order.
+
   - This function can be called in an interrupt context.
 
 - device_config
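On the client side, a status check then has to account for this return value. A minimal sketch, not part of this patch, built on dmaengine_tx_status() (check_cookie_status() is a hypothetical helper)::

    #include <linux/dmaengine.h>

    /*
     * Hypothetical client-side check: when the provider exports
     * DMA_COMPLETION_NO_ORDER, dmaengine_tx_status() reports DMA_OUT_OF_ORDER
     * and cookie-based progress tracking cannot be relied on, so the client
     * has to fall back to per-descriptor completion callbacks.
     */
    static int check_cookie_status(struct dma_chan *chan, dma_cookie_t cookie)
    {
            struct dma_tx_state state;

            switch (dmaengine_tx_status(chan, cookie, &state)) {
            case DMA_COMPLETE:
                    return 0;
            case DMA_OUT_OF_ORDER:
                    /* Cookies are meaningless here; track via callbacks. */
                    return -EOPNOTSUPP;
            case DMA_ERROR:
                    return -EIO;
            default:
                    return -EINPROGRESS;  /* DMA_IN_PROGRESS or DMA_PAUSED */
            }
    }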
@@ -432,7 +528,7 @@
 DMA_CTRL_ACK
 
   - If clear, the descriptor cannot be reused by provider until the
-    client acknowledges receipt, i.e. has has a chance to establish any
+    client acknowledges receipt, i.e. has a chance to establish any
     dependency chains
 
   - This can be acked by invoking async_tx_ack()
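For illustration, a client that does not build dependency chains could ack a descriptor right away, either by passing DMA_CTRL_ACK at prep time or by calling async_tx_ack() afterwards. A hypothetical sketch with an assumed pre-mapped buffer (submit_and_ack() is not an existing helper)::

    #include <linux/dmaengine.h>

    /*
     * Hypothetical client sketch: a descriptor that will not anchor any
     * dependency chain is acked immediately, so the provider may recycle
     * it as soon as it completes.
     */
    static dma_cookie_t submit_and_ack(struct dma_chan *chan, dma_addr_t buf,
                                       size_t len)
    {
            struct dma_async_tx_descriptor *tx;

            tx = dmaengine_prep_slave_single(chan, buf, len, DMA_MEM_TO_DEV,
                                             DMA_PREP_INTERRUPT);
            if (!tx)
                    return -ENOMEM;

            async_tx_ack(tx);  /* equivalent to setting DMA_CTRL_ACK */
            return dmaengine_submit(tx);
    }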
@@ -475,6 +571,34 @@
     writes for which the descriptor should be in different format from
     normal data descriptors.
 
+- DMA_PREP_REPEAT
+
+  - If set, the transfer will be automatically repeated when it ends until a
+    new transfer is queued on the same channel with the DMA_PREP_LOAD_EOT flag.
+    If the next transfer to be queued on the channel does not have the
+    DMA_PREP_LOAD_EOT flag set, the current transfer will be repeated until the
+    client terminates all transfers.
+
+  - This flag is only supported if the channel reports the DMA_REPEAT
+    capability.
+
+- DMA_PREP_LOAD_EOT
+
+  - If set, the transfer will replace the transfer currently being executed at
+    the end of the transfer.
+
+  - This is the default behaviour for non-repeated transfers; specifying
+    DMA_PREP_LOAD_EOT for non-repeated transfers will thus make no difference.
+
+  - When using repeated transfers, DMA clients will usually need to set the
+    DMA_PREP_LOAD_EOT flag on all transfers; otherwise the channel will keep
+    repeating the last repeated transfer and ignore the new transfers being
+    queued. Failure to set DMA_PREP_LOAD_EOT will appear as if the channel was
+    stuck on the previous transfer.
+
+  - This flag is only supported if the channel reports the DMA_LOAD_EOT
+    capability.
+
 General Design Notes
 ====================
 
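As an illustration of how the two flags combine, a hypothetical display-style client could queue each frame as a repeated interleaved transfer that replaces the previous one at EOT. In this sketch, xt is assumed to be a fully populated dma_interleaved_template and queue_frame() is an invented helper::

    #include <linux/dmaengine.h>

    /*
     * Hypothetical client sketch: every frame is submitted with both
     * DMA_PREP_REPEAT (keep sending this frame until replaced) and
     * DMA_PREP_LOAD_EOT (replace the previous frame at end of transfer).
     */
    static dma_cookie_t queue_frame(struct dma_chan *chan,
                                    struct dma_interleaved_template *xt)
    {
            struct dma_async_tx_descriptor *tx;
            unsigned long flags = DMA_PREP_INTERRUPT | DMA_PREP_REPEAT |
                                  DMA_PREP_LOAD_EOT;

            tx = dmaengine_prep_interleaved_dma(chan, xt, flags);
            if (!tx)
                    return -ENOMEM;

            return dmaengine_submit(tx);
    }

As with any dmaengine transfer, the submitted descriptor only starts once dma_async_issue_pending() is called on the channel.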