1# Video Decoding
2
3<!--Kit: AVCodec Kit-->
4<!--Subsystem: Multimedia-->
5<!--Owner: @zhanghongran-->
6<!--Designer: @dpy2650--->
7<!--Tester: @cyakee-->
8<!--Adviser: @zengyawen-->
9
10You can call native APIs to perform video decoding, which decodes media data into a YUV file or renders it.
11
12<!--RP3--><!--RP3End-->
13
14For details about the supported decoding capabilities, see [AVCodec Supported Formats](avcodec-support-formats.md#video-decoding).
15
16<!--RP1--><!--RP1End-->
17
18Through the VideoDecoder module, your application can implement the following key capabilities.
19
20|          Capability                      |             How to Configure                                                                    |
21| --------------------------------------- | ---------------------------------------------------------------------------------- |
22| Variable resolution        | The decoder supports the change of the input stream resolution. After the resolution is changed, the callback function **OnStreamChanged()** set by **OH_VideoDecoder_RegisterCallback** is triggered. For details, see step 3 in surface mode or step 3 in buffer mode. |
23| Dynamic surface switching | Call **OH_VideoDecoder_SetSurface** to configure this capability. It is supported only in surface mode. For details, see step 6 in surface mode.   |
24| Low-latency decoding | Call **OH_VideoDecoder_Configure** to configure this capability. For details, see step 5 in surface mode or step 5 in buffer mode.     |
25
26## Constraints
27
- After **flush()**, **reset()**, or **stop()** is called, the PPS/SPS must be transferred again when **start()** is called. For details about the example, see step 13 in [Surface Mode](#surface-mode).
29- If **flush()**, **reset()**, **stop()**, or **destroy()** is executed in a non-callback thread, the execution result is returned after all callbacks are executed.
30- Due to limited hardware decoder resources, you must call **OH_VideoDecoder_Destroy** to destroy every decoder instance when it is no longer needed.
- The input streams for video decoding must be in AnnexB format. Multiple slices are supported, but all slices of the same frame must be sent to the decoder at one time.
- When **flush()**, **reset()**, or **stop()** is called, do not continue to operate on the OH_AVBuffer instances obtained through earlier callback functions.
- The DRM decryption capability supports both non-secure and secure video channels in [surface mode](#surface-mode), but only non-secure video channels in [buffer mode](#buffer-mode).
34- The buffer mode and surface mode use the same APIs. Therefore, the surface mode is described as an example.
- In buffer mode, after obtaining the pointer to an OH_AVBuffer instance through the callback function **OH_AVCodecOnNewOutputBuffer**, call **OH_VideoDecoder_FreeOutputBuffer** to notify the system that the buffer has been fully used, so that the system can write subsequently decoded data to the corresponding location. If the OH_NativeBuffer instance is obtained through **OH_AVBuffer_GetNativeBuffer** and its lifecycle extends beyond that of the OH_AVBuffer pointer instance, you must duplicate the data. In this case, manage the lifecycle of the newly generated OH_NativeBuffer object so that it can be correctly used and released (see the sketch after this list).
36<!--RP6--><!--RP6End-->
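
The following is a minimal sketch of the lifecycle handling described in the last constraint. It assumes buffer mode, an illustrative helper named KeepDecodedFrame, and that the OH_NativeBuffer reference returned by **OH_AVBuffer_GetNativeBuffer** is released with **OH_NativeBuffer_Unreference** from **native_buffer.h**; verify the release call against your target SDK.

```c++
#include <multimedia/player_framework/native_avbuffer.h>
#include <native_buffer/native_buffer.h>
#include <cstdint>
#include <vector>

// Called with the OH_AVBuffer delivered by OH_AVCodecOnNewOutputBuffer in buffer mode.
void KeepDecodedFrame(OH_AVBuffer *buffer, std::vector<uint8_t> &frameCopy)
{
    // The returned OH_NativeBuffer is a reference whose lifecycle is managed by the caller.
    OH_NativeBuffer *nativeBuffer = OH_AVBuffer_GetNativeBuffer(buffer);
    if (nativeBuffer == nullptr) {
        return;
    }
    // Duplicate the data if it must outlive the OH_AVBuffer instance.
    // (In practice, copy only the valid size reported by OH_AVBuffer_GetBufferAttr.)
    uint8_t *addr = OH_AVBuffer_GetAddr(buffer);
    int32_t capacity = OH_AVBuffer_GetCapacity(buffer);
    if (addr != nullptr && capacity > 0) {
        frameCopy.assign(addr, addr + capacity);
    }
    // Release the reference once the native buffer is no longer needed.
    (void)OH_NativeBuffer_Unreference(nativeBuffer);
}
```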
37
38## Surface Output and Buffer Output
39
40- Surface output and buffer output differ in data output modes.
41- They are applicable to different scenarios.
42  - Surface output indicates that the OHNativeWindow is used to transfer output data. It supports connection with other modules, such as the **XComponent**.
43  - Buffer output indicates that decoded data is output in shared memory mode.
44
45- The two also differ slightly in the API calling modes:
46  - In surface mode, you can choose to call **OH_VideoDecoder_FreeOutputBuffer** to free the output buffer (without rendering the data). In buffer mode, you must call **OH_VideoDecoder_FreeOutputBuffer** to free the output buffer.
47  - In surface mode, you must call **OH_VideoDecoder_SetSurface** to set an OHNativeWindow before the decoder is ready and call **OH_VideoDecoder_RenderOutputBuffer** to render the decoded data after the decoder is started.
48  - In buffer mode, an application can obtain the shared memory address and data from the output buffer. In surface mode, an application can obtain the data from the output buffer.
49
50- Data transfer performance in surface mode is better than that in buffer mode.
51
52For details about the development procedure, see [Surface Mode](#surface-mode) and [Buffer Mode](#buffer-mode).
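
A minimal sketch of the divergent output handling is shown below. The helper names are illustrative, and **videoDec**, the buffer **index**, and the **buffer** pointer are assumed to come from the callbacks described in [How to Develop](#how-to-develop); this is not a complete decoding loop.

```c++
#include <multimedia/player_framework/native_avcodec_videodecoder.h>
#include <multimedia/player_framework/native_avbuffer.h>
#include <cstdint>

// Surface mode: the output buffer is rendered to the bound OHNativeWindow (or dropped with
// OH_VideoDecoder_FreeOutputBuffer); the application does not read the pixel data itself.
void HandleSurfaceOutput(OH_AVCodec *videoDec, uint32_t index)
{
    (void)OH_VideoDecoder_RenderOutputBuffer(videoDec, index); // Render and free the buffer.
}

// Buffer mode: the application reads the decoded data from shared memory and must then
// free the output buffer.
void HandleBufferOutput(OH_AVCodec *videoDec, uint32_t index, OH_AVBuffer *buffer)
{
    uint8_t *addr = OH_AVBuffer_GetAddr(buffer); // Virtual address of the decoded data.
    (void)addr; // Consume the decoded data here, for example, write it to a YUV file.
    (void)OH_VideoDecoder_FreeOutputBuffer(videoDec, index); // Mandatory in buffer mode.
}
```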
53
54## State Machine Interaction
55
56The following figure shows the interaction between states.
57
58![Invoking relationship of state](figures/state-invocation.png)
59
601. A decoder enters the Initialized state in either of the following ways:
61   - When a decoder instance is initially created, the decoder enters the Initialized state.
62   - When **OH_VideoDecoder_Reset** is called in any state, the decoder returns to the Initialized state.
63
642. When the decoder is in the Initialized state, you can call **OH_VideoDecoder_Configure** to configure the decoder. After the configuration, the decoder enters the Configured state.
653. When the decoder is in the Configured state, you can call **OH_VideoDecoder_Prepare** to switch it to the Prepared state.
664. When the decoder is in the Prepared state, you can call **OH_VideoDecoder_Start** to switch it to the Executing state.
67   - When the decoder is in the Executing state, you can call **OH_VideoDecoder_Stop** to switch it back to the Prepared state.
68
5. In rare cases, the decoder may encounter an error and enter the Error state. In this case, a queue operation may return an invalid value or throw an exception.
70   - When the decoder is in the Error state, you can either call **OH_VideoDecoder_Reset** to switch it to the Initialized state or call **OH_VideoDecoder_Destroy** to switch it to the Released state.
71
726. The Executing state has three substates: Flushed, Running, and End-of-Stream.
   - After **OH_VideoDecoder_Start** is called, the decoder enters the Running substate immediately.
   - When the decoder is in the Executing state, you can call **OH_VideoDecoder_Flush** to switch it to the Flushed substate.
   - After all data to be processed is transferred to the decoder, the [AVCODEC_BUFFER_FLAGS_EOS](../../reference/apis-avcodec-kit/_core.md#oh_avcodecbufferflags-1) flag is added to the last input buffer in the input buffer queue. Once this flag is detected, the decoder transitions to the End-of-Stream substate. In this state, the decoder does not accept new input, but continues to generate output until it reaches the tail frame.
76
777. When the decoder is no longer needed, you must call **OH_VideoDecoder_Destroy** to destroy the decoder instance, which then transitions to the Released state.
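
The following is a minimal sketch of the call sequence that drives these state transitions. Error handling, callback registration details, and buffer rotation are omitted; the function name and the 320 x 240 format values are illustrative, and the surface is assumed to be obtained as described in [Surface Mode](#surface-mode).

```c++
#include <multimedia/player_framework/native_avcodec_videodecoder.h>
#include <multimedia/player_framework/native_avcodec_base.h>
#include <multimedia/player_framework/native_avformat.h>
#include <native_window/external_window.h>

void RunDecoderStateMachine(OHNativeWindow *nativeWindow)
{
    // Initialized: the decoder instance is created.
    OH_AVCodec *videoDec = OH_VideoDecoder_CreateByMime(OH_AVCODEC_MIMETYPE_VIDEO_AVC);
    // Register the OH_AVCodecCallback functions here (see How to Develop).
    // Initialized -> Configured.
    OH_AVFormat *format = OH_AVFormat_Create();
    OH_AVFormat_SetIntValue(format, OH_MD_KEY_WIDTH, 320);
    OH_AVFormat_SetIntValue(format, OH_MD_KEY_HEIGHT, 240);
    OH_VideoDecoder_Configure(videoDec, format);
    OH_AVFormat_Destroy(format);
    // Surface mode only: bind the output window before Prepare.
    OH_VideoDecoder_SetSurface(videoDec, nativeWindow);
    // Configured -> Prepared -> Executing (Running substate).
    OH_VideoDecoder_Prepare(videoDec);
    OH_VideoDecoder_Start(videoDec);
    // ... push input buffers and consume output buffers here ...
    // Executing -> Flushed substate; call Start again (and retransfer the PPS/SPS) to continue.
    OH_VideoDecoder_Flush(videoDec);
    OH_VideoDecoder_Start(videoDec);
    // Executing -> Prepared.
    OH_VideoDecoder_Stop(videoDec);
    // Any state -> Initialized, then -> Released.
    OH_VideoDecoder_Reset(videoDec);
    OH_VideoDecoder_Destroy(videoDec);
    videoDec = nullptr;
}
```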
78
79## How to Develop
80
81Read [VideoDecoder](../../reference/apis-avcodec-kit/_video_decoder.md) for the API reference.
82
83The figure below shows the call relationship of video decoding.
84
85- The dotted line indicates an optional operation.
86
87- The solid line indicates a mandatory operation.
88
89![Call relationship of video decoding](figures/video-decode.png)
90
91### Linking the Dynamic Libraries in the CMake Script
92
93``` cmake
94target_link_libraries(sample PUBLIC libnative_media_codecbase.so)
95target_link_libraries(sample PUBLIC libnative_media_core.so)
96target_link_libraries(sample PUBLIC libnative_media_vdec.so)
97```
98
99> **NOTE**
100>
101> The word **sample** in the preceding code snippet is only an example. Use the actual project directory name.
102>
103
104### Defining the Basic Structure
105
106The sample code provided in this section adheres to the C++17 standard and is for reference only. You can define your own buffer objects by referring to it.
107
1081. Add the header files.
109
110    ```c++
111    #include <condition_variable>
112    #include <memory>
113    #include <mutex>
114    #include <queue>
115    #include <shared_mutex>
116    ```
117
1182. Define the information about the decoder callback buffer.
119
120    ```c++
121    struct CodecBufferInfo {
122        CodecBufferInfo(uint32_t index, OH_AVBuffer *buffer): index(index), buffer(buffer), isValid(true) {}
123        // Callback buffer.
124        OH_AVBuffer *buffer = nullptr;
125        // Index of the callback buffer.
126        uint32_t index = 0;
127        // Check whether the current buffer information is valid.
128        bool isValid = true;
129    };
130    ```
131
1323. Define the input and output queue for decoding.
133
134    ```c++
135    class CodecBufferQueue {
136    public:
137        // Pass the callback buffer information to the queue.
138        void Enqueue(const std::shared_ptr<CodecBufferInfo> bufferInfo)
139        {
140            std::unique_lock<std::mutex> lock(mutex_);
141            bufferQueue_.push(bufferInfo);
142            cond_.notify_all();
143        }
144
145        // Obtain the information about the callback buffer.
146        std::shared_ptr<CodecBufferInfo> Dequeue(int32_t timeoutMs = 1000)
147        {
148            std::unique_lock<std::mutex> lock(mutex_);
149            (void)cond_.wait_for(lock, std::chrono::milliseconds(timeoutMs), [this]() { return !bufferQueue_.empty(); });
150            if (bufferQueue_.empty()) {
151                return nullptr;
152            }
153            std::shared_ptr<CodecBufferInfo> bufferInfo = bufferQueue_.front();
154            bufferQueue_.pop();
155            return bufferInfo;
156        }
157
158        // Clear the queue. The previous callback buffer becomes unavailable.
159        void Flush()
160        {
161            std::unique_lock<std::mutex> lock(mutex_);
162            while (!bufferQueue_.empty()) {
163                std::shared_ptr<CodecBufferInfo> bufferInfo = bufferQueue_.front();
164                // After the flush, stop, reset, and destroy operations are performed, the previous callback buffer information is invalid.
165                bufferInfo->isValid = false;
166                bufferQueue_.pop();
167            }
168        }
169
170    private:
171        std::mutex mutex_;
172        std::condition_variable cond_;
173        std::queue<std::shared_ptr<CodecBufferInfo>> bufferQueue_;
174    };
175    ```
176
1774. Configure global variables.
178
179    These global variables are for reference only. They can be encapsulated into an object based on service requirements.
180
181    ```c++
182    // Video frame width.
183    int32_t width = 320;
184    // Video frame height.
185    int32_t height = 240;
186    // Video pixel format.
    OH_AVPixelFormat pixelFormat = AV_PIXEL_FORMAT_NV12;
188    // Video width stride.
189    int32_t widthStride = 0;
190    // Video height stride.
191    int32_t heightStride = 0;
192    // Pointer to the decoder instance.
193    OH_AVCodec *videoDec = nullptr;
194    // Decoder synchronization lock.
195    std::shared_mutex codecMutex;
196    // Decoder input queue.
197    CodecBufferQueue inQueue;
198    // Decoder output queue.
199    CodecBufferQueue outQueue;
200    ```
201
202### Surface Mode
203
204The following walks you through how to implement the entire video decoding process in surface mode and implement data rotation in asynchronous mode. In this example, an H.264 stream file is input, decoded, and rendered.
205
2061. Add the header files.
207
208    ```c++
209    #include <multimedia/player_framework/native_avcodec_videodecoder.h>
210    #include <multimedia/player_framework/native_avcapability.h>
211    #include <multimedia/player_framework/native_avcodec_base.h>
212    #include <multimedia/player_framework/native_avformat.h>
213    #include <multimedia/player_framework/native_avbuffer.h>
214    #include <fstream>
215    ```
216
2172. Create a decoder instance.
218
219    You can create a decoder by name or MIME type. In the code snippet below, the following variables are used:
220
221    - **videoDec**: pointer to the video decoder instance.
222    - **capability**: pointer to the decoder's capability.
223    - **OH_AVCODEC_MIMETYPE_VIDEO_AVC**: AVC video codec.
224
225    ```c++
226    // To create a decoder by name, call OH_AVCapability_GetName to obtain the codec names available and then call OH_VideoDecoder_CreateByName. If your application has special requirements, for example, expecting a decoder that supports a certain resolution, you can call OH_AVCodec_GetCapability to query the capability first.
227    OH_AVCapability *capability = OH_AVCodec_GetCapability(OH_AVCODEC_MIMETYPE_VIDEO_AVC, false);
    // Alternatively, to create a hardware decoder instance, query only hardware decoder capabilities:
    // OH_AVCapability *capability = OH_AVCodec_GetCapabilityByCategory(OH_AVCODEC_MIMETYPE_VIDEO_AVC, false, HARDWARE);
230    const char *name = OH_AVCapability_GetName(capability);
231    OH_AVCodec *videoDec = OH_VideoDecoder_CreateByName(name);
232    ```
233
234    ```c++
235    // Create a decoder by MIME type. Only specific codecs recommended by the system can be created in this way.
236    // If multiple codecs need to be created, create hardware decoder instances first. If the hardware resources are insufficient, create software decoder instances.
237    // Create an H.264 decoder for software/hardware decoding.
238    OH_AVCodec *videoDec = OH_VideoDecoder_CreateByMime(OH_AVCODEC_MIMETYPE_VIDEO_AVC);
    // Create an H.265 decoder for hardware decoding.
240    OH_AVCodec *videoDec = OH_VideoDecoder_CreateByMime(OH_AVCODEC_MIMETYPE_VIDEO_HEVC);
241    ```
242
2433. Call **OH_VideoDecoder_RegisterCallback()** to register the callback functions.
244
245    Register the **OH_AVCodecCallback** struct that defines the following callback function pointers:
246
247    - **OH_AVCodecOnError**, a callback used to report a codec operation error. For details about the error codes, see [OH_AVCodecOnError](../../reference/apis-avcodec-kit/_codec_base.md#oh_avcodeconerror).
248    - **OH_AVCodecOnStreamChanged**, a callback used to report a codec stream change, for example, stream width or height change.
249    - **OH_AVCodecOnNeedInputBuffer**, a callback used to report input data required, which means that the decoder is ready for receiving data.
250    - **OH_AVCodecOnNewOutputBuffer**, a callback used to report output data generated, which means that decoding is complete.
251
252    You need to process the callback functions to ensure that the decoder runs properly.
253
254    <!--RP2--><!--RP2End-->
255
256    ```c++
257    // Implement the OH_AVCodecOnError callback function.
258    static void OnError(OH_AVCodec *codec, int32_t errorCode, void *userData)
259    {
260        // Process the error code in the callback.
261        (void)codec;
262        (void)errorCode;
263        (void)userData;
264    }
265
266    // Implement the OH_AVCodecOnStreamChanged callback function.
267    static void OnStreamChanged(OH_AVCodec *codec, OH_AVFormat *format, void *userData)
268    {
269        // The changed video width and height can be obtained through format.
270        (void)codec;
271        (void)userData;
272        bool ret = OH_AVFormat_GetIntValue(format, OH_MD_KEY_VIDEO_PIC_WIDTH, &width) &&
273                   OH_AVFormat_GetIntValue(format, OH_MD_KEY_VIDEO_PIC_HEIGHT, &height);
274        if (!ret) {
            // Handle exceptions.
276        }
277    }
278
279    // Implement the OH_AVCodecOnNeedInputBuffer callback function.
280    static void OnNeedInputBuffer(OH_AVCodec *codec, uint32_t index, OH_AVBuffer *buffer, void *userData)
281    {
282        // The data buffer of the input frame and its index are sent to inQueue.
283        (void)codec;
284        (void)userData;
285        inQueue.Enqueue(std::make_shared<CodecBufferInfo>(index, buffer));
286    }
287
288    // Implement the OH_AVCodecOnNewOutputBuffer callback function.
289    static void OnNewOutputBuffer(OH_AVCodec *codec, uint32_t index, OH_AVBuffer *buffer, void *userData)
290    {
291        // The data buffer of the finished frame and its index are sent to outQueue.
292        (void)codec;
293        (void)userData;
294        outQueue.Enqueue(std::make_shared<CodecBufferInfo>(index, buffer));
295    }
296    // Call OH_VideoDecoder_RegisterCallback() to register the callback functions.
297    OH_AVCodecCallback cb = {&OnError, &OnStreamChanged, &OnNeedInputBuffer, &OnNewOutputBuffer};
298    // Set the asynchronous callbacks.
299    OH_AVErrCode ret = OH_VideoDecoder_RegisterCallback(videoDec, cb, nullptr); // nullptr: userData on which the callback depends is empty.
300    if (ret != AV_ERR_OK) {
301        // Handle exceptions.
302    }
303    ```
304
305    > **NOTE**
306    >
307    > In the callback functions, pay attention to multi-thread synchronization for operations on the data queue.
308    >
309    > During video playback, if the SPS of the video stream contains color information, the decoder will return the information (RangeFlag, ColorPrimary, MatrixCoefficient, and TransferCharacteristic) through the **OH_AVFormat** parameter in the **OH_AVCodecOnStreamChanged** callback.
310    >
311    > In surface mode of video decoding, the internal data is processed by using High Efficiency Bandwidth Compression (HEBC) by default, and the values of **widthStride** and **heightStride** cannot be obtained.
312    >
313
3144. (Optional) Call **OH_VideoDecoder_SetDecryptionConfig** to set the decryption configuration. Call this API after the media key system information is obtained and a media key is obtained but before **Prepare()** is called. For details about how to obtain such information, see step 4 in [Media Data Demultiplexing](audio-video-demuxer.md).  In surface mode, the DRM decryption capability supports both secure and non-secure video channels. For details about DRM APIs, see [DRM](../../reference/apis-drm-kit/capi-drm.md).
315
316    Add the header files.
317
318    ```c++
319    #include <multimedia/drm_framework/native_mediakeysystem.h>
320    #include <multimedia/drm_framework/native_mediakeysession.h>
321    #include <multimedia/drm_framework/native_drm_err.h>
322    #include <multimedia/drm_framework/native_drm_common.h>
323    ```
324
325    Link the dynamic libraries in the CMake script.
326
327    ``` cmake
328    target_link_libraries(sample PUBLIC libnative_drm.so)
329    ```
330
331    <!--RP4-->The sample code is as follows:<!--RP4End-->
332
333    ```c++
334    // Create a media key system based on the media key system information. The following uses com.clearplay.drm as an example.
335    MediaKeySystem *system = nullptr;
336    int32_t ret = OH_MediaKeySystem_Create("com.clearplay.drm", &system);
337    if (system == nullptr) {
338        printf("create media key system failed");
339        return;
340    }
341
342    // Create a decryption session. If a secure video channel is used, create a MediaKeySession with the content protection level of CONTENT_PROTECTION_LEVEL_HW_CRYPTO or higher.
343    // To use a non-secure video channel, create a MediaKeySession with the content protection level of CONTENT_PROTECTION_LEVEL_SW_CRYPTO or higher.
344    MediaKeySession *session = nullptr;
345    DRM_ContentProtectionLevel contentProtectionLevel = CONTENT_PROTECTION_LEVEL_SW_CRYPTO;
346    ret = OH_MediaKeySystem_CreateMediaKeySession(system, &contentProtectionLevel, &session);
347    if (ret != DRM_OK) {
348        // If the creation fails, refer to the DRM interface document and check logs.
349        printf("create media key session failed.");
350        return;
351    }
352    if (session == nullptr) {
353        printf("media key session is nullptr.");
354        return;
355    }
356
357    // Generate a media key request and set the response to the media key request.
358
359    // Set the decryption configuration, that is, set the decryption session and secure video channel flag to the decoder.
360    // If the DRM scheme supports a secure video channel, set secureVideoPath to true and create a secure decoder before using the channel.
361    // That is, in step 2, call OH_VideoDecoder_CreateByName, with a decoder name followed by .secure (for example, [CodecName].secure) passed in, to create a secure decoder.
362    bool secureVideoPath = false;
363    ret = OH_VideoDecoder_SetDecryptionConfig(videoDec, session, secureVideoPath);
364    ```
365
3665. Call **OH_VideoDecoder_Configure()** to configure the decoder.
367
    For details about the configurable options, see [Video Dedicated Key-Value Pairs](../../reference/apis-avcodec-kit/_codec_base.md#media-data-key-value-pairs).
369
370    For details about the parameter verification rules, see [OH_VideoDecoder_Configure()](../../reference/apis-avcodec-kit/_video_decoder.md#oh_videodecoder_configure).
371
372    The parameter value ranges can be obtained through the capability query interface. For details, see [Obtaining Supported Codecs](obtain-supported-codecs.md).
373
    Currently, the video frame width and height must be configured for all supported formats.
375
376    ```c++
378    auto format = std::shared_ptr<OH_AVFormat>(OH_AVFormat_Create(), OH_AVFormat_Destroy);
379    if (format == nullptr) {
380        // Handle exceptions.
381    }
382    // Set the format.
383    OH_AVFormat_SetIntValue(format.get(), OH_MD_KEY_WIDTH, width); // Mandatory.
384    OH_AVFormat_SetIntValue(format.get(), OH_MD_KEY_HEIGHT, height); // Mandatory.
385    OH_AVFormat_SetIntValue(format.get(), OH_MD_KEY_PIXEL_FORMAT, pixelFormat);
386    // (Optional) Configure low-latency decoding.
387    // If supported by the platform, the video decoder outputs frames in the decoding sequence when OH_MD_KEY_VIDEO_ENABLE_LOW_LATENCY is enabled.
388    OH_AVFormat_SetIntValue(format.get(), OH_MD_KEY_VIDEO_ENABLE_LOW_LATENCY, 1);
389    // Configure the decoder.
390    OH_AVErrCode ret = OH_VideoDecoder_Configure(videoDec, format.get());
391    if (ret != AV_ERR_OK) {
392        // Handle exceptions.
393    }
394    ```
395
3966. Set the surface.
397
398    You can obtain NativeWindow in either of the following ways:
399
400    6.1 If the image is directly displayed after being decoded, obtain NativeWindow from the **XComponent**.
401
402    Add the header files.
403
404    ```c++
405    #include <native_window/external_window.h>
406    ```
407
408    Link the dynamic libraries in the CMake script.
409
410    ``` cmake
411    target_link_libraries(sample PUBLIC libnative_window.so)
412    ```
413
    6.1.1 On the ArkTS side, call **getXComponentSurfaceId** of the **XComponentController** to obtain the surface ID of the **XComponent**. For details about the operation, see [Custom Rendering (XComponent)](../../ui/napi-xcomponent-guidelines.md).
415
416    6.1.2 On the native side, call **OH_NativeWindow_CreateNativeWindowFromSurfaceId** to create a NativeWindow instance.
417
418    ```c++
419    OHNativeWindow* nativeWindow;
    // Create a NativeWindow instance based on the surface ID obtained in step 6.1.1.
421    OH_NativeWindow_CreateNativeWindowFromSurfaceId(surfaceId, &nativeWindow);
422    ```
423
424    6.2 If OpenGL post-processing is performed after decoding, obtain NativeWindow from NativeImage. For details about the operation, see [NativeImage](../../graphics/native-image-guidelines.md).
425
    You can also perform this step during decoding, that is, dynamically switch the surface.
427
428    ```c++
429    // Set the surface.
430    // Set the window parameters.
431    OH_AVErrCode ret = OH_VideoDecoder_SetSurface(videoDec, nativeWindow);  // Obtain a NativeWindow instance using either of the preceding methods.
432    if (ret != AV_ERR_OK) {
433        // Handle exceptions.
434    }
435    // Configure the matching mode between the video and screen. (Scale the buffer at the original aspect ratio to enable the smaller side of the buffer to match the window, while making the excess part transparent.)
436    OH_NativeWindow_NativeWindowSetScalingModeV2(nativeWindow, OH_SCALING_MODE_SCALE_CROP_V2);
437    ```
438
439    > **NOTE**
440    >
441    > If both decoder 1 and decoder 2 are bound to the same NativeWindow using the **OH_VideoDecoder_SetSurface** function, and decoder 2 is running, destroying decoder 1 with **OH_VideoDecoder_Destroy** will cause decoder 2's video playback to freeze.
442    >
443    > Consider the following approaches:
444    > 1. Wait for decoder 1 to be fully released before starting decoder 2 with **OH_VideoDecoder_Start**.
445    > 2. Use surface 1 for decoder 1, and create a temporary surface for decoder 2 using **OH_ConsumerSurface_Create**. Once decoder 1 is released, bind decoder 2 to surface 1 using **OH_VideoDecoder_SetSurface**. For details, see [Concurrently Creating a Video Decoder and Initializing NativeWindow](../../media/avcodec/parallel-decoding-nativeWindow.md).
446
447
4487. Call **OH_VideoDecoder_Prepare()** to prepare internal resources for the decoder.
449
    ```c++
    OH_AVErrCode ret = OH_VideoDecoder_Prepare(videoDec);
452    if (ret != AV_ERR_OK) {
453        // Handle exceptions.
454    }
455    ```
456
4578. Call **OH_VideoDecoder_Start()** to start the decoder.
458
459    ```c++
460    // Start the decoder.
461    OH_AVErrCode ret = OH_VideoDecoder_Start(videoDec);
462    if (ret != AV_ERR_OK) {
463        // Handle exceptions.
464    }
465    ```
466
4679. (Optional) Call **OH_VideoDecoder_SetParameter()** to set the surface parameters of the decoder.
468
    For details about the configurable options, see [Video Dedicated Key-Value Pairs](../../reference/apis-avcodec-kit/_codec_base.md#media-data-key-value-pairs).
470
471    ```c++
472    auto format = std::shared_ptr<OH_AVFormat>(OH_AVFormat_Create(), OH_AVFormat_Destroy);
473    if (format == nullptr) {
474        // Handle exceptions.
475    }
476    // Configure the display rotation angle.
477    OH_AVFormat_SetIntValue(format.get(), OH_MD_KEY_ROTATION, 90);
478    OH_AVErrCode ret = OH_VideoDecoder_SetParameter(videoDec, format.get());
479    if (ret != AV_ERR_OK) {
480        // Handle exceptions.
481    }
482    ```
483
48410. (Optional) Call **OH_AVCencInfo_SetAVBuffer()** to set the Common Encryption Scheme (CENC) information.
485
486    If the program to play is DRM encrypted and the application implements media demultiplexing instead of using the system's [demuxer](audio-video-demuxer.md), you must call **OH_AVCencInfo_SetAVBuffer()** to set the CENC information to the AVBuffer. In this way, the AVBuffer carries the data to be decrypted and CENC information, so that the media data in the AVBuffer can be decrypted. You do not need to call this API when the application uses the system's [demuxer](audio-video-demuxer.md).
487
488    Add the header files.
489
490    ```c++
491    #include <multimedia/player_framework/native_cencinfo.h>
492    ```
493
494    Link the dynamic libraries in the CMake script.
495
496    ``` cmake
497    target_link_libraries(sample PUBLIC libnative_media_avcencinfo.so)
498    ```
499
500    In the code snippet below, the following variable is used:
501    - **buffer**: parameter passed by the callback function **OnNeedInputBuffer**.
502    ```c++
503    uint32_t keyIdLen = DRM_KEY_ID_SIZE;
504    uint8_t keyId[] = {
505        0xd4, 0xb2, 0x01, 0xe4, 0x61, 0xc8, 0x98, 0x96,
506        0xcf, 0x05, 0x22, 0x39, 0x8d, 0x09, 0xe6, 0x28};
507    uint32_t ivLen = DRM_KEY_IV_SIZE;
508    uint8_t iv[] = {
509        0xbf, 0x77, 0xed, 0x51, 0x81, 0xde, 0x36, 0x3e,
510        0x52, 0xf7, 0x20, 0x4f, 0x72, 0x14, 0xa3, 0x95};
511    uint32_t encryptedBlockCount = 0;
512    uint32_t skippedBlockCount = 0;
513    uint32_t firstEncryptedOffset = 0;
514    uint32_t subsampleCount = 1;
515    DrmSubsample subsamples[1] = { {0x10, 0x16} };
516    // Create a CencInfo instance.
517    OH_AVCencInfo *cencInfo = OH_AVCencInfo_Create();
518    if (cencInfo == nullptr) {
519        // Handle exceptions.
520    }
521    // Set the decryption algorithm.
522    OH_AVErrCode errNo = OH_AVCencInfo_SetAlgorithm(cencInfo, DRM_ALG_CENC_AES_CTR);
523    if (errNo != AV_ERR_OK) {
524        // Handle exceptions.
525    }
526    // Set KeyId and Iv.
527    errNo = OH_AVCencInfo_SetKeyIdAndIv(cencInfo, keyId, keyIdLen, iv, ivLen);
528    if (errNo != AV_ERR_OK) {
529        // Handle exceptions.
530    }
531    // Set the sample information.
532    errNo = OH_AVCencInfo_SetSubsampleInfo(cencInfo, encryptedBlockCount, skippedBlockCount, firstEncryptedOffset,
533        subsampleCount, subsamples);
534    if (errNo != AV_ERR_OK) {
535        // Handle exceptions.
536    }
537    // Set the mode. KeyId, Iv, and SubSamples have been set.
538    errNo = OH_AVCencInfo_SetMode(cencInfo, DRM_CENC_INFO_KEY_IV_SUBSAMPLES_SET);
539    if (errNo != AV_ERR_OK) {
540        // Handle exceptions.
541    }
542    // Set CencInfo to the AVBuffer.
543    errNo = OH_AVCencInfo_SetAVBuffer(cencInfo, buffer);
544    if (errNo != AV_ERR_OK) {
545        // Handle exceptions.
546    }
547    // Destroy the CencInfo instance.
548    errNo = OH_AVCencInfo_Destroy(cencInfo);
549    if (errNo != AV_ERR_OK) {
550        // Handle exceptions.
551    }
552    ```
553
55411. Call **OH_VideoDecoder_PushInputBuffer()** to push the stream to the input buffer for decoding.
555
556    In the code snippet below, the following variables are used:
557    - **size**, **offset**, **pts**, and **frameData**: size, offset, timestamp, and frame data. For details about how to obtain such information, see step 9 in [Media Data Demultiplexing](./audio-video-demuxer.md).
558    - **flags**: type of the buffer flag. For details, see [OH_AVCodecBufferFlags](../../reference/apis-avcodec-kit/_core.md#oh_avcodecbufferflags).
559
560    The member variables of **bufferInfo** are as follows:
561    - **buffer**: parameter passed by the callback function **OnNeedInputBuffer**. You can obtain the virtual address of the input stream by calling [OH_AVBuffer_GetAddr](../../reference/apis-avcodec-kit/_core.md#oh_avbuffer_getaddr).
562    - **index**: parameter passed by the callback function **OnNeedInputBuffer**, which uniquely corresponds to the buffer.
563    - **isValid**: whether the buffer instance stored in **bufferInfo** is valid.
564
565    ```c++
566    std::shared_ptr<CodecBufferInfo> bufferInfo = inQueue.Dequeue();
567    std::shared_lock<std::shared_mutex> lock(codecMutex);
568    if (bufferInfo == nullptr || !bufferInfo->isValid) {
569        // Handle exceptions.
570    }
571    // Write stream data.
572    uint8_t *addr = OH_AVBuffer_GetAddr(bufferInfo->buffer);
573    if (addr == nullptr) {
574       // Handle exceptions.
575    }
576    int32_t capacity = OH_AVBuffer_GetCapacity(bufferInfo->buffer);
577    if (size > capacity) {
578        // Handle exceptions.
579    }
580    memcpy(addr, frameData, size);
581    // Configure the size, offset, and timestamp of the frame data.
582    OH_AVCodecBufferAttr info;
583    info.size = size;
584    info.offset = offset;
585    info.pts = pts;
586    info.flags = flags;
587    // Write the information to the buffer.
588    OH_AVErrCode setBufferRet = OH_AVBuffer_SetBufferAttr(bufferInfo->buffer, &info);
589    if (setBufferRet != AV_ERR_OK) {
590        // Handle exceptions.
591    }
592    // Send the data to the input buffer for decoding.
593    OH_AVErrCode pushInputRet = OH_VideoDecoder_PushInputBuffer(videoDec, bufferInfo->index);
594    if (pushInputRet != AV_ERR_OK) {
595        // Handle exceptions.
596    }
597    ```
598
59912. Call **OH_VideoDecoder_RenderOutputBuffer()** or **OH_VideoDecoder_RenderOutputBufferAtTime()** to render the data and free the output buffer, or call **OH_VideoDecoder_FreeOutputBuffer()** to directly free the output buffer.
600
601    In the following example, the member variables of **bufferInfo** are as follows:
602    - **index**: parameter passed by the callback function **OnNewOutputBuffer**, which uniquely corresponds to the buffer.
603    - **buffer**: parameter passed by the callback function **OnNewOutputBuffer**. In surface mode, you cannot obtain the virtual address of the image by calling [OH_AVBuffer_GetAddr](../../reference/apis-avcodec-kit/_core.md#oh_avbuffer_getaddr).
604    - **isValid**: whether the buffer instance stored in **bufferInfo** is valid.
605
606    ```c++
607    std::shared_ptr<CodecBufferInfo> bufferInfo = outQueue.Dequeue();
608    std::shared_lock<std::shared_mutex> lock(codecMutex);
609    if (bufferInfo == nullptr || !bufferInfo->isValid) {
610        // Handle exceptions.
611    }
612    // Obtain the decoded information.
613    OH_AVCodecBufferAttr info;
614    OH_AVErrCode getBufferRet = OH_AVBuffer_GetBufferAttr(bufferInfo->buffer, &info);
615    if (getBufferRet != AV_ERR_OK) {
616        // Handle exceptions.
617    }
    // Determine these values based on service requirements.
    bool isRender = true;
    bool isNeedRenderAtTime = false;
621    OH_AVErrCode ret = AV_ERR_OK;
622    if (isRender) {
623        // Render the data and free the output buffer. index is the index of the buffer.
        if (isNeedRenderAtTime) {
            // Obtain the system absolute time, and set renderTimestamp to the expected display time based on service requirements.
626            int64_t renderTimestamp =
627                std::chrono::duration_cast<std::chrono::nanoseconds>(std::chrono::high_resolution_clock::now().time_since_epoch()).count();
628            ret = OH_VideoDecoder_RenderOutputBufferAtTime(videoDec, bufferInfo->index, renderTimestamp);
629        } else {
630            ret = OH_VideoDecoder_RenderOutputBuffer(videoDec, bufferInfo->index);
631        }
632
633    } else {
634        // Free the output buffer.
635        ret = OH_VideoDecoder_FreeOutputBuffer(videoDec, bufferInfo->index);
636    }
637    if (ret != AV_ERR_OK) {
638        // Handle exceptions.
639    }
640
641    ```
642
643    > **NOTE**
644    >
645    > To obtain the buffer attributes, such as **pixel_format** and **stride**, call [OH_NativeWindow_NativeWindowHandleOpt](../../reference/apis-arkgraphics2d/capi-external-window-h.md#oh_nativewindow_nativewindowhandleopt).
646    >
    > To render the data and free the output buffer, you are advised to preferentially call [OH_VideoDecoder_RenderOutputBufferAtTime](../../reference/apis-avcodec-kit/_video_decoder.md#oh_videodecoder_renderoutputbufferattime).
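
    The following is a minimal sketch of such a query on the OHNativeWindow bound in step 6. It assumes the **GET_FORMAT** and **GET_STRIDE** operation codes from the **NativeWindowOperation** enumeration in **external_window.h**; check the NativeWindow reference for the codes available on your target version.

    ```c++
    #include <native_window/external_window.h>
    #include <cstdint>

    void QueryWindowBufferAttributes(OHNativeWindow *nativeWindow)
    {
        int32_t format = 0;
        int32_t stride = 0;
        // Query the pixel format and stride of the buffers attached to the window.
        (void)OH_NativeWindow_NativeWindowHandleOpt(nativeWindow, GET_FORMAT, &format);
        (void)OH_NativeWindow_NativeWindowHandleOpt(nativeWindow, GET_STRIDE, &stride);
        // Use format and stride based on service requirements.
    }
    ```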
648
64913. (Optional) Call **OH_VideoDecoder_Flush()** to refresh the decoder.
650
651    After **OH_VideoDecoder_Flush** is called, the decoder remains in the Running state, but the input and output data and parameter set (such as the H.264 PPS/SPS) buffered in the decoder are cleared.
652
653    To continue decoding, you must call **OH_VideoDecoder_Start** again.
654
655    In the code snippet below, the following variables are used:
656
657    - **xpsData** and **xpsSize**: PPS/SPS information. For details about how to obtain such information, see [Media Data Demultiplexing](./audio-video-demuxer.md).
658
659    ```c++
660    std::unique_lock<std::shared_mutex> lock(codecMutex);
661    // Refresh the decoder.
662    OH_AVErrCode flushRet = OH_VideoDecoder_Flush(videoDec);
663    if (flushRet != AV_ERR_OK) {
664        // Handle exceptions.
665    }
666    inQueue.Flush();
667    outQueue.Flush();
668    // Start decoding again.
669    OH_AVErrCode startRet = OH_VideoDecoder_Start(videoDec);
670    if (startRet != AV_ERR_OK) {
671        // Handle exceptions.
672    }
673
674    std::shared_ptr<CodecBufferInfo> bufferInfo = inQueue.Dequeue();
675    if (bufferInfo == nullptr || !bufferInfo->isValid) {
676        // Handle exceptions.
677    }
678    // Retransfer PPS/SPS.
679    // Configure the frame PPS/SPS information.
680    uint8_t *addr = OH_AVBuffer_GetAddr(bufferInfo->buffer);
681    if (addr == nullptr) {
682       // Handle exceptions.
683    }
684    int32_t capacity = OH_AVBuffer_GetCapacity(bufferInfo->buffer);
685    if (xpsSize > capacity) {
686        // Handle exceptions.
687    }
688    memcpy(addr, xpsData, xpsSize);
689    OH_AVCodecBufferAttr info;
690    info.flags = AVCODEC_BUFFER_FLAG_CODEC_DATA;
691    // Write the information to the buffer.
692    OH_AVErrCode setBufferRet = OH_AVBuffer_SetBufferAttr(bufferInfo->buffer, &info);
693    if (setBufferRet != AV_ERR_OK) {
694        // Handle exceptions.
695    }
696    // Push the frame data to the decoder. index is the index of the corresponding queue.
697    OH_AVErrCode pushInputRet = OH_VideoDecoder_PushInputBuffer(videoDec, bufferInfo->index);
698    if (pushInputRet != AV_ERR_OK) {
699        // Handle exceptions.
700    }
701
702    ```
703
704    > **NOTE**
705    > When **OH_VideoDecoder_Start** is called again after the flush operation, the PPS/SPS must be retransferred.
706    >
707
70814. (Optional) Call **OH_VideoDecoder_Reset()** to reset the decoder.
709
710    After **OH_VideoDecoder_Reset** is called, the decoder returns to the Initialized state. To continue decoding, you must call **OH_VideoDecoder_Configure**, **OH_VideoDecoder_SetSurface**, and **OH_VideoDecoder_Prepare** in sequence.
711
712    ```c++
713    std::unique_lock<std::shared_mutex> lock(codecMutex);
714    // Reset the decoder.
715    OH_AVErrCode resetRet = OH_VideoDecoder_Reset(videoDec);
716    if (resetRet != AV_ERR_OK) {
717        // Handle exceptions.
718    }
719    inQueue.Flush();
720    outQueue.Flush();
721    // Reconfigure the decoder.
722    auto format = std::shared_ptr<OH_AVFormat>(OH_AVFormat_Create(), OH_AVFormat_Destroy);
723    if (format == nullptr) {
724        // Handle exceptions.
725    }
726    OH_AVErrCode configRet = OH_VideoDecoder_Configure(videoDec, format.get());
727    if (configRet != AV_ERR_OK) {
728        // Handle exceptions.
729    }
730    // Reconfigure the surface in surface mode. This is not required in buffer mode.
731    OH_AVErrCode setRet = OH_VideoDecoder_SetSurface(videoDec, nativeWindow);
732    if (setRet != AV_ERR_OK) {
733        // Handle exceptions.
734    }
735    // The decoder is ready again.
736    OH_AVErrCode prepareRet = OH_VideoDecoder_Prepare(videoDec);
737    if (prepareRet != AV_ERR_OK) {
738        // Handle exceptions.
739    }
740    ```
741
74215. (Optional) Call **OH_VideoDecoder_Stop()** to stop the decoder.
743
    After **OH_VideoDecoder_Stop()** is called, the decoder retains the decoding instance and releases the input and output buffers. You can directly call **OH_VideoDecoder_Start** to continue decoding, but the first input buffer must carry the parameter set (PPS/SPS) and the input must start from an IDR frame.
745
746    ```c++
747    std::unique_lock<std::shared_mutex> lock(codecMutex);
748    // Stop the decoder.
749    OH_AVErrCode ret = OH_VideoDecoder_Stop(videoDec);
750    if (ret != AV_ERR_OK) {
751        // Handle exceptions.
752    }
753    inQueue.Flush();
754    outQueue.Flush();
755    ```
756
75716. Call **OH_VideoDecoder_Destroy()** to destroy the decoder instance and release resources.
758
759    > **NOTE**
760    >
761    > This API cannot be called in the callback function.
762    >
    > After the call, you must set the decoder pointer to null to prevent program errors caused by wild pointers.
764
765    ```c++
766    std::unique_lock<std::shared_mutex> lock(codecMutex);
767    // Release the nativeWindow instance.
    if (nativeWindow != nullptr) {
769        OH_NativeWindow_DestroyNativeWindow(nativeWindow);
770        nativeWindow = nullptr;
771    }
772    // Call OH_VideoDecoder_Destroy to destroy the decoder.
773    OH_AVErrCode ret = AV_ERR_OK;
774    if (videoDec != nullptr) {
775        ret = OH_VideoDecoder_Destroy(videoDec);
776        videoDec = nullptr;
777    }
778    if (ret != AV_ERR_OK) {
779        // Handle exceptions.
780    }
781    inQueue.Flush();
782    outQueue.Flush();
783    ```
784
785### Buffer Mode
786
787The following walks you through how to implement the entire video decoding process in buffer mode and implement data rotation in asynchronous mode. In this example, an H.264 stream file is input and decoded into a YUV file.
788
789
7901. Add the header files.
791
792    ```c++
793    #include <multimedia/player_framework/native_avcodec_videodecoder.h>
794    #include <multimedia/player_framework/native_avcapability.h>
795    #include <multimedia/player_framework/native_avcodec_base.h>
796    #include <multimedia/player_framework/native_avformat.h>
797    #include <multimedia/player_framework/native_avbuffer.h>
798    #include <native_buffer/native_buffer.h>
799    #include <fstream>
800    ```
801
8022. Create a decoder instance.
803
804    The procedure is the same as that in surface mode and is not described here.
805
806    ```c++
807    // To create a decoder by name, call OH_AVCapability_GetName to obtain the codec names available and then call OH_VideoDecoder_CreateByName. If your application has special requirements, for example, expecting a decoder that supports a certain resolution, you can call OH_AVCodec_GetCapability to query the capability first.
808    OH_AVCapability *capability = OH_AVCodec_GetCapability(OH_AVCODEC_MIMETYPE_VIDEO_AVC, false);
809    const char *name = OH_AVCapability_GetName(capability);
810    OH_AVCodec *videoDec = OH_VideoDecoder_CreateByName(name);
811    ```
812
813    ```c++
814    // Create a decoder by MIME type. Only specific codecs recommended by the system can be created in this way.
815    // If multiple codecs need to be created, create hardware decoder instances first. If the hardware resources are insufficient, create software decoder instances.
816    // Create an H.264 decoder for software/hardware decoding.
817    OH_AVCodec *videoDec = OH_VideoDecoder_CreateByMime(OH_AVCODEC_MIMETYPE_VIDEO_AVC);
818    // Create an H.265 decoder for hardware decoding.
819    OH_AVCodec *videoDec = OH_VideoDecoder_CreateByMime(OH_AVCODEC_MIMETYPE_VIDEO_HEVC);
820    ```
821
8223. Call **OH_VideoDecoder_RegisterCallback()** to register the callback functions.
823
824    Register the **OH_AVCodecCallback** struct that defines the following callback function pointers:
825
826    - **OH_AVCodecOnError**, a callback used to report a codec operation error. For details about the error codes, see [OH_AVCodecOnError](../../reference/apis-avcodec-kit/_codec_base.md#oh_avcodeconerror).
827    - **OH_AVCodecOnStreamChanged**, a callback used to report a codec stream change, for example, stream width or height change.
828    - **OH_AVCodecOnNeedInputBuffer**, a callback used to report input data required, which means that the decoder is ready for receiving data.
829    - **OH_AVCodecOnNewOutputBuffer**, a callback used to report output data generated, which means that decoding is complete.
830
831    You need to process the callback functions to ensure that the decoder runs properly.
832
833    <!--RP2--><!--RP2End-->
834
835    ```c++
836    int32_t cropTop = 0;
837    int32_t cropBottom = 0;
838    int32_t cropLeft = 0;
839    int32_t cropRight = 0;
840    bool isFirstFrame = true;
841    // Implement the OH_AVCodecOnError callback function.
842    static void OnError(OH_AVCodec *codec, int32_t errorCode, void *userData)
843    {
844        // Process the error code in the callback.
845        (void)codec;
846        (void)errorCode;
847        (void)userData;
848    }
849
850    // Implement the OH_AVCodecOnStreamChanged callback function.
851    static void OnStreamChanged(OH_AVCodec *codec, OH_AVFormat *format, void *userData)
852    {
        // Optional. Configure this only when you need to obtain the video width, height, and stride.
854        // The changed video width, height, and stride can be obtained through format.
855        (void)codec;
856        (void)userData;
857        bool ret = OH_AVFormat_GetIntValue(format, OH_MD_KEY_VIDEO_PIC_WIDTH, &width) &&
858                   OH_AVFormat_GetIntValue(format, OH_MD_KEY_VIDEO_PIC_HEIGHT, &height) &&
859                   OH_AVFormat_GetIntValue(format, OH_MD_KEY_VIDEO_STRIDE, &widthStride) &&
860                   OH_AVFormat_GetIntValue(format, OH_MD_KEY_VIDEO_SLICE_HEIGHT, &heightStride) &&
861                   // (Optional) Obtain the cropped rectangle information.
862                   OH_AVFormat_GetIntValue(format, OH_MD_KEY_VIDEO_CROP_TOP, &cropTop) &&
863                   OH_AVFormat_GetIntValue(format, OH_MD_KEY_VIDEO_CROP_BOTTOM, &cropBottom) &&
864                   OH_AVFormat_GetIntValue(format, OH_MD_KEY_VIDEO_CROP_LEFT, &cropLeft) &&
865                   OH_AVFormat_GetIntValue(format, OH_MD_KEY_VIDEO_CROP_RIGHT, &cropRight);
866        if (!ret) {
            // Handle exceptions.
868        }
869    }
870
871    // Implement the OH_AVCodecOnNeedInputBuffer callback function.
872    static void OnNeedInputBuffer(OH_AVCodec *codec, uint32_t index, OH_AVBuffer *buffer, void *userData)
873    {
874        // The data buffer of the input frame and its index are sent to inQueue.
875        (void)codec;
876        (void)userData;
877        inQueue.Enqueue(std::make_shared<CodecBufferInfo>(index, buffer));
878    }
879
880    // Implement the OH_AVCodecOnNewOutputBuffer callback function.
881    static void OnNewOutputBuffer(OH_AVCodec *codec, uint32_t index, OH_AVBuffer *buffer, void *userData)
882    {
        // Optional. Configure this only when you need to obtain the video width, height, and stride.
884        // Obtain the video width, height, and stride.
885        if (isFirstFrame) {
886            auto format = std::shared_ptr<OH_AVFormat>(OH_VideoDecoder_GetOutputDescription(codec), OH_AVFormat_Destroy);
887            if (format == nullptr) {
888                // Handle exceptions.
889            }
890            bool ret = OH_AVFormat_GetIntValue(format.get(), OH_MD_KEY_VIDEO_PIC_WIDTH, &width) &&
891                       OH_AVFormat_GetIntValue(format.get(), OH_MD_KEY_VIDEO_PIC_HEIGHT, &height) &&
892                       OH_AVFormat_GetIntValue(format.get(), OH_MD_KEY_VIDEO_STRIDE, &widthStride) &&
893                       OH_AVFormat_GetIntValue(format.get(), OH_MD_KEY_VIDEO_SLICE_HEIGHT, &heightStride) &&
894                       // (Optional) Obtain the cropped rectangle information.
895                       OH_AVFormat_GetIntValue(format.get(), OH_MD_KEY_VIDEO_CROP_TOP, &cropTop) &&
896                       OH_AVFormat_GetIntValue(format.get(), OH_MD_KEY_VIDEO_CROP_BOTTOM, &cropBottom) &&
897                       OH_AVFormat_GetIntValue(format.get(), OH_MD_KEY_VIDEO_CROP_LEFT, &cropLeft) &&
898                       OH_AVFormat_GetIntValue(format.get(), OH_MD_KEY_VIDEO_CROP_RIGHT, &cropRight);
899            if (!ret) {
                // Handle exceptions.
901            }
902            isFirstFrame = false;
903        }
904        // The data buffer of the finished frame and its index are sent to outQueue.
905        (void)userData;
906        outQueue.Enqueue(std::make_shared<CodecBufferInfo>(index, buffer));
907    }
908    // Call OH_VideoDecoder_RegisterCallback() to register the callback functions.
909    OH_AVCodecCallback cb = {&OnError, &OnStreamChanged, &OnNeedInputBuffer, &OnNewOutputBuffer};
910    // Set the asynchronous callbacks.
911    OH_AVErrCode ret = OH_VideoDecoder_RegisterCallback(videoDec, cb, nullptr); // nullptr: userData on which the callback depends is empty.
912    if (ret != AV_ERR_OK) {
913        // Handle exceptions.
914    }
915    ```
916
917    > **NOTE**
918    >
919    > In the callback functions, pay attention to multi-thread synchronization for operations on the data queue.
920    >
921
9224. (Optional) Call **OH_VideoDecoder_SetDecryptionConfig** to set the decryption configuration. Call this API after the media key system information is obtained and a media key is obtained but before **Prepare()** is called. For details about how to obtain such information, see step 4 in [Media Data Demultiplexing](audio-video-demuxer.md).  In buffer mode, the DRM decryption capability supports only non-secure video channels. For details about DRM APIs, see [DRM](../../reference/apis-drm-kit/capi-drm.md).
923
924    Add the header files.
925
926    ```c++
927    #include <multimedia/drm_framework/native_mediakeysystem.h>
928    #include <multimedia/drm_framework/native_mediakeysession.h>
929    #include <multimedia/drm_framework/native_drm_err.h>
930    #include <multimedia/drm_framework/native_drm_common.h>
931    ```
932
933    Link the dynamic libraries in the CMake script.
934
935    ``` cmake
936    target_link_libraries(sample PUBLIC libnative_drm.so)
937    ```
938
939    The sample code is as follows:
940
941    ```c++
942    // Create a media key system based on the media key system information. The following uses com.clearplay.drm as an example.
943    MediaKeySystem *system = nullptr;
944    int32_t ret = OH_MediaKeySystem_Create("com.clearplay.drm", &system);
945    if (system == nullptr) {
946        printf("create media key system failed");
947        return;
948    }
949
950    // Create a media key session.
951    // To use a non-secure video channel, create a MediaKeySession with the content protection level of CONTENT_PROTECTION_LEVEL_SW_CRYPTO or higher.
952    MediaKeySession *session = nullptr;
953    DRM_ContentProtectionLevel contentProtectionLevel = CONTENT_PROTECTION_LEVEL_SW_CRYPTO;
954    ret = OH_MediaKeySystem_CreateMediaKeySession(system, &contentProtectionLevel, &session);
955    if (ret != DRM_OK) {
956        // If the creation fails, refer to the DRM interface document and check logs.
957        printf("create media key session failed.");
958        return;
959    }
960    if (session == nullptr) {
961        printf("media key session is nullptr.");
962        return;
963    }
964    // Generate a media key request and set the response to the media key request.
965    // Set the decryption configuration, that is, set the decryption session and secure video channel flag to the decoder.
966    bool secureVideoPath = false;
967    ret = OH_VideoDecoder_SetDecryptionConfig(videoDec, session, secureVideoPath);
968    ```
969
9705. Call **OH_VideoDecoder_Configure()** to configure the decoder.
971
972    The procedure is the same as that in surface mode and is not described here.
973
974    ```c++
975    auto format = std::shared_ptr<OH_AVFormat>(OH_AVFormat_Create(), OH_AVFormat_Destroy);
976    if (format == nullptr) {
977        // Handle exceptions.
978    }
979    // Set the format.
980    OH_AVFormat_SetIntValue(format.get(), OH_MD_KEY_WIDTH, width); // Mandatory.
981    OH_AVFormat_SetIntValue(format.get(), OH_MD_KEY_HEIGHT, height); // Mandatory.
982    OH_AVFormat_SetIntValue(format.get(), OH_MD_KEY_PIXEL_FORMAT, pixelFormat);
983    // Configure the decoder.
984    OH_AVErrCode ret = OH_VideoDecoder_Configure(videoDec, format.get());
985    if (ret != AV_ERR_OK) {
986        // Handle exceptions.
987    }
988    ```
989
9906. Call **OH_VideoDecoder_Prepare()** to prepare internal resources for the decoder.
991
    ```c++
    OH_AVErrCode ret = OH_VideoDecoder_Prepare(videoDec);
994    if (ret != AV_ERR_OK) {
995        // Handle exceptions.
996    }
997    ```
998
9997. Call **OH_VideoDecoder_Start()** to start the decoder.
1000
1001    ```c++
1002    std::unique_ptr<std::ofstream> outputFile = std::make_unique<std::ofstream>();
1003    if (outputFile != nullptr) {
1004        outputFile->open("/*yourpath*.yuv", std::ios::out | std::ios::binary | std::ios::ate);
1005    }
1006    // Start the decoder.
1007    OH_AVErrCode ret = OH_VideoDecoder_Start(videoDec);
1008    if (ret != AV_ERR_OK) {
1009        // Handle exceptions.
1010    }
1011    ```
1012
10138. (Optional) Call **OH_VideoDecoder_SetParameter()** to set the decoder parameters.
1014
    For details about the configurable options, see [Video Dedicated Key-Value Pairs](../../reference/apis-avcodec-kit/_codec_base.md#media-data-key-value-pairs).
1016
1017    ```c++
1018    auto format = std::shared_ptr<OH_AVFormat>(OH_AVFormat_Create(), OH_AVFormat_Destroy);
1019    if (format == nullptr) {
1020        // Handle exceptions.
1021    }
1022    // Set the frame rate.
1023    OH_AVFormat_SetDoubleValue(format.get(), OH_MD_KEY_FRAME_RATE, 30.0);
1024    OH_AVErrCode ret = OH_VideoDecoder_SetParameter(videoDec, format.get());
1025    if (ret != AV_ERR_OK) {
1026        // Handle exceptions.
1027    }
1028    ```
1029
10309. (Optional) Call **OH_AVCencInfo_SetAVBuffer()** to set the CENC information.
1031
1032    The procedure is the same as that in surface mode and is not described here.
1033
1034    The sample code is as follows:
1035
1036    ```c++
1037    uint32_t keyIdLen = DRM_KEY_ID_SIZE;
1038    uint8_t keyId[] = {
1039        0xd4, 0xb2, 0x01, 0xe4, 0x61, 0xc8, 0x98, 0x96,
1040        0xcf, 0x05, 0x22, 0x39, 0x8d, 0x09, 0xe6, 0x28};
1041    uint32_t ivLen = DRM_KEY_IV_SIZE;
1042    uint8_t iv[] = {
1043        0xbf, 0x77, 0xed, 0x51, 0x81, 0xde, 0x36, 0x3e,
1044        0x52, 0xf7, 0x20, 0x4f, 0x72, 0x14, 0xa3, 0x95};
1045    uint32_t encryptedBlockCount = 0;
1046    uint32_t skippedBlockCount = 0;
1047    uint32_t firstEncryptedOffset = 0;
1048    uint32_t subsampleCount = 1;
1049    DrmSubsample subsamples[1] = { {0x10, 0x16} };
1050    // Create a CencInfo instance.
1051    OH_AVCencInfo *cencInfo = OH_AVCencInfo_Create();
1052    if (cencInfo == nullptr) {
1053        // Handle exceptions.
1054    }
1055    // Set the decryption algorithm.
1056    OH_AVErrCode errNo = OH_AVCencInfo_SetAlgorithm(cencInfo, DRM_ALG_CENC_AES_CTR);
1057    if (errNo != AV_ERR_OK) {
1058        // Handle exceptions.
1059    }
1060    // Set KeyId and Iv.
1061    errNo = OH_AVCencInfo_SetKeyIdAndIv(cencInfo, keyId, keyIdLen, iv, ivLen);
1062    if (errNo != AV_ERR_OK) {
1063        // Handle exceptions.
1064    }
1065    // Set the sample information.
1066    errNo = OH_AVCencInfo_SetSubsampleInfo(cencInfo, encryptedBlockCount, skippedBlockCount, firstEncryptedOffset,
1067        subsampleCount, subsamples);
1068    if (errNo != AV_ERR_OK) {
1069        // Handle exceptions.
1070    }
1071    // Set the mode. KeyId, Iv, and SubSamples have been set.
1072    errNo = OH_AVCencInfo_SetMode(cencInfo, DRM_CENC_INFO_KEY_IV_SUBSAMPLES_SET);
1073    if (errNo != AV_ERR_OK) {
1074        // Handle exceptions.
1075    }
1076    // Set CencInfo to the AVBuffer.
1077    errNo = OH_AVCencInfo_SetAVBuffer(cencInfo, buffer);
1078    if (errNo != AV_ERR_OK) {
1079        // Handle exceptions.
1080    }
1081    // Destroy the CencInfo instance.
1082    errNo = OH_AVCencInfo_Destroy(cencInfo);
1083    if (errNo != AV_ERR_OK) {
1084        // Handle exceptions.
1085    }
1086    ```
1087
108810. Call **OH_VideoDecoder_PushInputBuffer()** to push the stream to the input buffer for decoding.
1089
1090    The procedure is the same as that in surface mode and is not described here.
1091
1092    ```c++
1093    std::shared_ptr<CodecBufferInfo> bufferInfo = inQueue.Dequeue();
1094    std::shared_lock<std::shared_mutex> lock(codecMutex);
1095    if (bufferInfo == nullptr || !bufferInfo->isValid) {
1096        // Handle exceptions.
1097    }
1098    // Write stream data.
1099    uint8_t *addr = OH_AVBuffer_GetAddr(bufferInfo->buffer);
1100    if (addr == nullptr) {
1101       // Handle exceptions.
1102    }
1103    int32_t capacity = OH_AVBuffer_GetCapacity(bufferInfo->buffer);
1104    if (size > capacity) {
1105        // Handle exceptions.
1106    }
1107    memcpy(addr, frameData, size);
1108    // Configure the size, offset, and timestamp of the frame data.
1109    OH_AVCodecBufferAttr info;
1110    info.size = size;
1111    info.offset = offset;
1112    info.pts = pts;
1113    info.flags = flags;
1114    // Write the information to the buffer.
1115    OH_AVErrCode setBufferRet = OH_AVBuffer_SetBufferAttr(bufferInfo->buffer, &info);
1116    if (setBufferRet != AV_ERR_OK) {
1117        // Handle exceptions.
1118    }
1119    // Send the data to the input buffer for decoding. index is the index of the buffer.
1120    OH_AVErrCode pushInputRet = OH_VideoDecoder_PushInputBuffer(videoDec, bufferInfo->index);
1121    if (pushInputRet != AV_ERR_OK) {
1122        // Handle exceptions.
1123    }
1124    ```
1125
112611. Call **OH_VideoDecoder_FreeOutputBuffer()** to free the output buffer.
1127
1128    In the following example, the member variables of **bufferInfo** are as follows:
1129
1130    - **index**: parameter passed by the callback function **OnNewOutputBuffer**, which uniquely corresponds to the buffer.
1131    - **buffer**: parameter passed by the callback function **OnNewOutputBuffer**. You can obtain the virtual address of an image by calling [OH_AVBuffer_GetAddr](../../reference/apis-avcodec-kit/_core.md#oh_avbuffer_getaddr).
1132    - **isValid**: whether the buffer instance stored in **bufferInfo** is valid.
1133
1134    ```c++
1135    std::shared_ptr<CodecBufferInfo> bufferInfo = outQueue.Dequeue();
1136    std::shared_lock<std::shared_mutex> lock(codecMutex);
1137    if (bufferInfo == nullptr || !bufferInfo->isValid) {
1138        // Handle exceptions.
1139    }
1140    // Obtain the decoded information.
1141    OH_AVCodecBufferAttr info;
1142    OH_AVErrCode getBufferRet = OH_AVBuffer_GetBufferAttr(bufferInfo->buffer, &info);
1143    if (getBufferRet != AV_ERR_OK) {
1144        // Handle exceptions.
1145    }
    // Write the decoded data (pointed to by addr) to the output file.
1147    uint8_t *addr = OH_AVBuffer_GetAddr(bufferInfo->buffer);
1148    if (addr == nullptr) {
1149       // Handle exceptions.
1150    }
1151    if (outputFile != nullptr && outputFile->is_open()) {
1152        outputFile->write(reinterpret_cast<char *>(addr), info.size);
1153    }
1154    // Free the buffer that stores the output data. index is the index of the buffer.
1155    OH_AVErrCode freeOutputRet = OH_VideoDecoder_FreeOutputBuffer(videoDec, bufferInfo->index);
1156    if (freeOutputRet != AV_ERR_OK) {
1157        // Handle exceptions.
1158    }
1159    ```
1160
    To copy the Y, U, and V components of an NV12 or NV21 image to another buffer in sequence (taking an NV12 image as an example), refer to the following image layout of **width**, **height**, **wStride**, and **hStride**.
1164
1165    - **OH_MD_KEY_VIDEO_PIC_WIDTH** corresponds to **width**.
1166    - **OH_MD_KEY_VIDEO_PIC_HEIGHT** corresponds to **height**.
1167    - **OH_MD_KEY_VIDEO_STRIDE** corresponds to **wStride**.
1168    - **OH_MD_KEY_VIDEO_SLICE_HEIGHT** corresponds to **hStride**.
1169
1170    ![copy by line](figures/copy-by-line-decoder.png)
1171
1172    Add the header files.
1173
1174    ```c++
1175    #include <string.h>
1176    ```
1177
1178    The sample code is as follows:
1179
1180    ```c++
1181    // Obtain the width and height of the source buffer by using the callback function OnStreamChanged or OH_VideoDecoder_GetOutputDescription.
1182    struct Rect
1183    {
1184        int32_t width;
1185        int32_t height;
1186    };
1187
1188    struct DstRect // Width stride and height stride of the destination buffer. They are set by the caller.
1189    {
1190        int32_t wStride;
1191        int32_t hStride;
1192    };
1193    // Obtain the width stride and height stride of the source buffer by using the callback function OnStreamChanged or OH_VideoDecoder_GetOutputDescription.
1194    struct SrcRect
1195    {
1196        int32_t wStride;
1197        int32_t hStride;
1198    };
1199
1200    Rect rect = {320, 240};
1201    DstRect dstRect = {320, 240};
1202    SrcRect srcRect = {320, 256};
1203    uint8_t* dst = new uint8_t[dstRect.hStride * dstRect.wStride * 3 / 2]; // Pointer to the target memory area.
1204    uint8_t* src = new uint8_t[srcRect.hStride * srcRect.wStride * 3 / 2]; // Pointer to the source memory area.
1205    uint8_t* dstTemp = dst;
1206    uint8_t* srcTemp = src;
    rect.height = ((rect.height + 1) / 2) * 2; // Ensure that the height is even.
    rect.width = ((rect.width + 1) / 2) * 2; // Ensure that the width is even.
1209
1210    // Y: Copy the source data in the Y region to the target data in another region.
1211    for (int32_t i = 0; i < rect.height; ++i) {
1212        // Copy a row of data from the source to a row of the target.
1213        memcpy(dstTemp, srcTemp, rect.width);
1214        // Update the pointers to the source data and target data to copy the next row. The pointers to the source data and target data are moved downwards by one wStride each time the source data and target data are updated.
1215        dstTemp += dstRect.wStride;
1216        srcTemp += srcRect.wStride;
1217    }
1218    // Padding.
1219    // Update the pointers to the source data and target data. The pointers move downwards by one padding.
1220    dstTemp += (dstRect.hStride - rect.height) * dstRect.wStride;
1221    srcTemp += (srcRect.hStride - rect.height) * srcRect.wStride;
1222    rect.height >>= 1;
1223    // UV: Copy the source data in the UV region to the target data in another region.
1224    for (int32_t i = 0; i < rect.height; ++i) {
1225        memcpy(dstTemp, srcTemp, rect.width);
1226        dstTemp += dstRect.wStride;
1227        srcTemp += srcRect.wStride;
1228    }
1229
1230    delete[] dst;
1231    dst = nullptr;
1232    delete[] src;
1233    src = nullptr;
1234    ```
1235
    During hardware decoding, the AVBuffer received in the output callback holds image data that has been aligned in width and height. Process the buffer data accordingly before freeing it.

    Generally, obtain the image width, height, stride, and pixel format to ensure that the decoded data is processed correctly. For details, see step 3 in [Buffer Mode](#buffer-mode).
1239
1240The subsequent processes (including refreshing, resetting, stopping, and destroying the decoder) are the same as those in surface mode. For details, see steps 13–16 in [Surface Mode](#surface-mode).
1241
1242<!--RP5-->
1243<!--RP5End-->
1244
1245 <!--no_check-->