# Video Decoding

You can call the native APIs provided by the VideoDecoder module to decode video, that is, to decode media data into a YUV file or render it.

<!--RP3--><!--RP3End-->

For details about the supported decoding capabilities, see [AVCodec Supported Formats](avcodec-support-formats.md#video-decoding).

<!--RP1--><!--RP1End-->

Through the VideoDecoder module, your application can implement the following key capabilities.

|          Capability                      |             How to Configure                                                                    |
| --------------------------------------- | ---------------------------------------------------------------------------------- |
| Variable resolution        | The decoder supports the change of the input stream resolution. After the resolution is changed, the callback function **OnStreamChanged()** set by **OH_VideoDecoder_RegisterCallback** is triggered. For details, see step 3 in surface mode or step 3 in buffer mode. |
| Dynamic surface switching | Call **OH_VideoDecoder_SetSurface** to configure this capability. It is supported only in surface mode. For details, see step 6 in surface mode.   |
| Low-latency decoding | Call **OH_VideoDecoder_Configure** to configure this capability. For details, see step 5 in surface mode or step 5 in buffer mode.     |

## Constraints

- HDR Vivid decoding is not supported in buffer mode.
- After **flush()**, **reset()**, or **stop()** is called, the PPS/SPS must be transferred again in the **start()** call. For details about the example, see step 13 in [Surface Output](#surface-output).
- If **flush()**, **reset()**, **stop()**, or **destroy()** is executed in a non-callback thread, the execution result is returned after all callbacks are executed.
- Due to limited hardware decoder resources, you must call **OH_VideoDecoder_Destroy** to destroy every decoder instance when it is no longer needed.
- The input stream for video decoding must be in AnnexB format. Multiple slices are supported, but all slices of the same frame must be sent to the decoder at a time.
- When **flush()**, **reset()**, or **stop()** is called, do not continue to operate the OH_AVBuffer obtained through a previous callback function.
- The DRM decryption capability supports both non-secure and secure video channels in [surface mode](#surface-output), but only non-secure video channels in [buffer mode](#buffer-output).
- Buffer mode and surface mode use the same APIs. Therefore, surface mode is described as an example.
- In buffer mode, after obtaining the pointer to an OH_AVBuffer instance through the callback function **OH_AVCodecOnNewOutputBuffer**, call **OH_VideoDecoder_FreeOutputBuffer** to notify the system that the buffer has been fully utilized. In this way, the system can write subsequently decoded data to the corresponding location. If the OH_NativeBuffer instance is obtained through **OH_AVBuffer_GetNativeBuffer** and its lifecycle extends beyond that of the OH_AVBuffer pointer instance, you must duplicate the data. In this case, you should manage the lifecycle of the newly generated OH_NativeBuffer object to ensure that the object can be correctly used and released. (A data-duplication sketch follows this list.)
<!--RP6--><!--RP6End-->
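
The following is a minimal sketch of the data duplication mentioned in the last constraint, assuming buffer mode and that the decoded bytes must outlive the callback buffer: the data is copied out of the OH_AVBuffer before the buffer is returned with **OH_VideoDecoder_FreeOutputBuffer**. The helper name **CopyAndRelease** and the use of std::vector are illustrative; error handling and queueing are omitted.

```c++
#include <cstdint>
#include <vector>
#include <multimedia/player_framework/native_avbuffer.h>
#include <multimedia/player_framework/native_avcodec_videodecoder.h>

// Copy the decoded bytes out of the output buffer, then return the buffer to the decoder.
static std::vector<uint8_t> CopyAndRelease(OH_AVCodec *codec, uint32_t index, OH_AVBuffer *buffer)
{
    std::vector<uint8_t> copy;
    OH_AVCodecBufferAttr attr;
    if (OH_AVBuffer_GetBufferAttr(buffer, &attr) == AV_ERR_OK && attr.size > 0) {
        uint8_t *addr = OH_AVBuffer_GetAddr(buffer);
        copy.assign(addr, addr + attr.size);
    }
    // After this call, the OH_AVBuffer must no longer be accessed.
    (void)OH_VideoDecoder_FreeOutputBuffer(codec, index);
    return copy;
}
```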

## Surface Output and Buffer Output

- Surface output and buffer output differ in their data output modes.
- They are applicable to different scenarios.
  - Surface output means that an OHNativeWindow is used to transfer output data. It supports connection with other modules, such as the **XComponent**.
  - Buffer output means that decoded data is output in shared memory mode.

- The two modes also differ slightly in how the APIs are called:
  - In surface mode, the caller can choose to call **OH_VideoDecoder_FreeOutputBuffer** to free the output buffer (without rendering the data). In buffer mode, the caller must call **OH_VideoDecoder_FreeOutputBuffer** to free the output buffer.
  - In surface mode, the caller must call **OH_VideoDecoder_SetSurface** to set an OHNativeWindow before the decoder is prepared and call **OH_VideoDecoder_RenderOutputBuffer** to render the decoded data after the decoder is started.
  - In buffer mode, an application can obtain the shared memory address and data from the output buffer. In surface mode, an application cannot obtain the image data from the output buffer.

For details about the development procedure, see [Surface Output](#surface-output) and [Buffer Output](#buffer-output).

## State Machine Interaction

The following figure shows the interaction between states.

![Invoking relationship of state](figures/state-invocation.png)

1. A decoder enters the Initialized state in either of the following ways:
   - When a decoder instance is initially created, the decoder enters the Initialized state.
   - When **OH_VideoDecoder_Reset** is called in any state, the decoder returns to the Initialized state.

2. When the decoder is in the Initialized state, you can call **OH_VideoDecoder_Configure** to configure the decoder. After the configuration, the decoder enters the Configured state.
3. When the decoder is in the Configured state, you can call **OH_VideoDecoder_Prepare** to switch it to the Prepared state.
4. When the decoder is in the Prepared state, you can call **OH_VideoDecoder_Start** to switch it to the Executing state.
   - When the decoder is in the Executing state, you can call **OH_VideoDecoder_Stop** to switch it back to the Prepared state.

5. In rare cases, the decoder may encounter an error and enter the Error state. If this is the case, queue operations may return an invalid value or throw an exception.
   - When the decoder is in the Error state, you can either call **OH_VideoDecoder_Reset** to switch it to the Initialized state or call **OH_VideoDecoder_Destroy** to switch it to the Released state.

6. The Executing state has three substates: Flushed, Running, and End-of-Stream.
   - After **OH_VideoDecoder_Start** is called, the decoder enters the Running substate immediately.
   - When the decoder is in the Executing state, you can call **OH_VideoDecoder_Flush** to switch it to the Flushed substate.
   - After all data to be processed is transferred to the decoder, the [AVCODEC_BUFFER_FLAGS_EOS](../../reference/apis-avcodec-kit/_core.md#oh_avcodecbufferflags-1) flag is added to the last input buffer in the input buffer queue. Once this flag is detected, the decoder transitions to the End-of-Stream substate. In this state, the decoder does not accept new input, but continues to generate output until it reaches the tail frame.

7. When the decoder is no longer needed, you must call **OH_VideoDecoder_Destroy** to destroy the decoder instance, which then transitions to the Released state.
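
For orientation, the following minimal sketch maps the APIs above onto these state transitions. It is illustrative only: callback registration, surface setup, data transfer, and error handling are omitted.

```c++
OH_AVCodec *videoDec = OH_VideoDecoder_CreateByMime(OH_AVCODEC_MIMETYPE_VIDEO_AVC); // Initialized
OH_AVFormat *format = OH_AVFormat_Create();
OH_AVFormat_SetIntValue(format, OH_MD_KEY_WIDTH, 320);
OH_AVFormat_SetIntValue(format, OH_MD_KEY_HEIGHT, 240);
OH_VideoDecoder_Configure(videoDec, format); // Initialized -> Configured
OH_AVFormat_Destroy(format);
OH_VideoDecoder_Prepare(videoDec);           // Configured -> Prepared
OH_VideoDecoder_Start(videoDec);             // Prepared -> Executing (Running substate)
OH_VideoDecoder_Flush(videoDec);             // Executing -> Flushed substate; call Start again to resume
OH_VideoDecoder_Stop(videoDec);              // Executing -> Prepared
OH_VideoDecoder_Reset(videoDec);             // Any state -> Initialized
OH_VideoDecoder_Destroy(videoDec);           // -> Released
videoDec = nullptr;
```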

## How to Develop

Read [VideoDecoder](../../reference/apis-avcodec-kit/_video_decoder.md) for the API reference.

The figure below shows the call relationship of video decoding.

- The dotted line indicates an optional operation.

- The solid line indicates a mandatory operation.

![Call relationship of video decoding](figures/video-decode.png)

### Linking the Dynamic Link Libraries in the CMake Script

``` cmake
target_link_libraries(sample PUBLIC libnative_media_codecbase.so)
target_link_libraries(sample PUBLIC libnative_media_core.so)
target_link_libraries(sample PUBLIC libnative_media_vdec.so)
```

> **NOTE**
>
> The word **sample** in the preceding code snippet is only an example. Use the actual project directory name.
>

### Defining the Basic Structure

The sample code provided in this section adheres to the C++17 standard and is for reference only. You can define your own buffer objects by referring to it.

1. Add the header files.

    ```c++
    #include <chrono>
    #include <condition_variable>
    #include <memory>
    #include <mutex>
    #include <queue>
    #include <shared_mutex>
    ```

2. Define the information about the decoder callback buffer.

    ```c++
    struct CodecBufferInfo {
        CodecBufferInfo(uint32_t index, OH_AVBuffer *buffer): buffer(buffer), index(index), isValid(true) {}
        // Callback buffer.
        OH_AVBuffer *buffer = nullptr;
        // Index of the callback buffer.
        uint32_t index = 0;
        // Whether the current buffer information is valid.
        bool isValid = true;
    };
    ```

3. Define the input and output queues for decoding.

    ```c++
    class CodecBufferQueue {
    public:
        // Pass the callback buffer information to the queue.
        void Enqueue(const std::shared_ptr<CodecBufferInfo> bufferInfo)
        {
            std::unique_lock<std::mutex> lock(mutex_);
            bufferQueue_.push(bufferInfo);
            cond_.notify_all();
        }

        // Obtain the information about the callback buffer.
        std::shared_ptr<CodecBufferInfo> Dequeue(int32_t timeoutMs = 1000)
        {
            std::unique_lock<std::mutex> lock(mutex_);
            (void)cond_.wait_for(lock, std::chrono::milliseconds(timeoutMs), [this]() { return !bufferQueue_.empty(); });
            if (bufferQueue_.empty()) {
                return nullptr;
            }
            std::shared_ptr<CodecBufferInfo> bufferInfo = bufferQueue_.front();
            bufferQueue_.pop();
            return bufferInfo;
        }

        // Clear the queue. The previous callback buffers become unavailable.
        void Flush()
        {
            std::unique_lock<std::mutex> lock(mutex_);
            while (!bufferQueue_.empty()) {
                std::shared_ptr<CodecBufferInfo> bufferInfo = bufferQueue_.front();
                // After the flush, stop, reset, or destroy operation is performed, the previous callback buffer information becomes invalid.
                bufferInfo->isValid = false;
                bufferQueue_.pop();
            }
        }

    private:
        std::mutex mutex_;
        std::condition_variable cond_;
        std::queue<std::shared_ptr<CodecBufferInfo>> bufferQueue_;
    };
    ```

4. Configure global variables.

    These global variables are for reference only. They can be encapsulated into an object based on service requirements.

    ```c++
    // Video frame width.
    int32_t width = 320;
    // Video frame height.
    int32_t height = 240;
    // Video pixel format.
    OH_AVPixelFormat pixelFormat = AV_PIXEL_FORMAT_NV12;
    // Video width stride.
    int32_t widthStride = 0;
    // Video height stride.
    int32_t heightStride = 0;
    // Pointer to the decoder instance.
    OH_AVCodec *videoDec = nullptr;
    // Decoder synchronization lock.
    std::shared_mutex codecMutex;
    // Decoder input queue.
    CodecBufferQueue inQueue;
    // Decoder output queue.
    CodecBufferQueue outQueue;
    ```

### Surface Output

The following walks you through how to implement the entire video decoding process in surface mode. In this example, an H.264 stream file is input, decoded, and rendered.

Currently, the VideoDecoder module supports only data rotation in asynchronous mode.

1. Add the header files.

    ```c++
    #include <multimedia/player_framework/native_avcodec_videodecoder.h>
    #include <multimedia/player_framework/native_avcapability.h>
    #include <multimedia/player_framework/native_avcodec_base.h>
    #include <multimedia/player_framework/native_avformat.h>
    #include <multimedia/player_framework/native_avbuffer.h>
    #include <fstream>
    ```

2. Create a decoder instance.

    You can create a decoder by name or MIME type. In the code snippet below, the following variables are used:

    - **videoDec**: pointer to the video decoder instance.
    - **capability**: pointer to the decoder's capability.
    - **OH_AVCODEC_MIMETYPE_VIDEO_AVC**: AVC video codec.

    ```c++
    // To create a decoder by name, call OH_AVCapability_GetName to obtain the codec names available and then call OH_VideoDecoder_CreateByName. If your application has special requirements, for example, expecting a decoder that supports a certain resolution, you can call OH_AVCodec_GetCapability to query the capability first.
    OH_AVCapability *capability = OH_AVCodec_GetCapability(OH_AVCODEC_MIMETYPE_VIDEO_AVC, false);
    // To create a hardware decoder instance only, call OH_AVCodec_GetCapabilityByCategory instead.
    // OH_AVCapability *capability = OH_AVCodec_GetCapabilityByCategory(OH_AVCODEC_MIMETYPE_VIDEO_AVC, false, HARDWARE);
    const char *name = OH_AVCapability_GetName(capability);
    OH_AVCodec *videoDec = OH_VideoDecoder_CreateByName(name);
    ```

    ```c++
    // Create a decoder by MIME type. Only specific codecs recommended by the system can be created in this way.
    // If multiple codecs need to be created, create hardware decoder instances first. If the hardware resources are insufficient, create software decoder instances.
    // Create an H.264 decoder for software/hardware decoding.
    OH_AVCodec *videoDec = OH_VideoDecoder_CreateByMime(OH_AVCODEC_MIMETYPE_VIDEO_AVC);
    // Create an H.265 decoder for hardware decoding.
    OH_AVCodec *videoDec = OH_VideoDecoder_CreateByMime(OH_AVCODEC_MIMETYPE_VIDEO_HEVC);
    ```

3. Call **OH_VideoDecoder_RegisterCallback()** to register the callback functions.

    Register the **OH_AVCodecCallback** struct that defines the following callback function pointers:

    - **OH_AVCodecOnError**, a callback used to report a codec operation error. For details about the error codes, see [OH_AVCodecOnError](../../reference/apis-avcodec-kit/_codec_base.md#oh_avcodeconerror).
    - **OH_AVCodecOnStreamChanged**, a callback used to report a codec stream change, for example, a stream width or height change.
    - **OH_AVCodecOnNeedInputBuffer**, a callback used to report input data required, which means that the decoder is ready to receive data.
    - **OH_AVCodecOnNewOutputBuffer**, a callback used to report output data generated, which means that decoding is complete. (Note: The **buffer** parameter in surface mode is null.)

    You need to process the callback functions to ensure that the decoder runs properly.

    <!--RP2--><!--RP2End-->

    ```c++
    // Implement the OH_AVCodecOnError callback function.
    static void OnError(OH_AVCodec *codec, int32_t errorCode, void *userData)
    {
        // Process the error code in the callback.
        (void)codec;
        (void)errorCode;
        (void)userData;
    }

    // Implement the OH_AVCodecOnStreamChanged callback function.
    static void OnStreamChanged(OH_AVCodec *codec, OH_AVFormat *format, void *userData)
    {
        // The changed video width and height can be obtained through format.
        (void)codec;
        (void)userData;
        OH_AVFormat_GetIntValue(format, OH_MD_KEY_VIDEO_PIC_WIDTH, &width);
        OH_AVFormat_GetIntValue(format, OH_MD_KEY_VIDEO_PIC_HEIGHT, &height);
    }

    // Implement the OH_AVCodecOnNeedInputBuffer callback function.
    static void OnNeedInputBuffer(OH_AVCodec *codec, uint32_t index, OH_AVBuffer *buffer, void *userData)
    {
        // The data buffer of the input frame and its index are sent to inQueue.
        (void)codec;
        (void)userData;
        inQueue.Enqueue(std::make_shared<CodecBufferInfo>(index, buffer));
    }

    // Implement the OH_AVCodecOnNewOutputBuffer callback function.
    static void OnNewOutputBuffer(OH_AVCodec *codec, uint32_t index, OH_AVBuffer *buffer, void *userData)
    {
        // The data buffer of the finished frame and its index are sent to outQueue.
        (void)codec;
        (void)userData;
        outQueue.Enqueue(std::make_shared<CodecBufferInfo>(index, buffer));
    }
    // Call OH_VideoDecoder_RegisterCallback() to register the callback functions.
    OH_AVCodecCallback cb = {&OnError, &OnStreamChanged, &OnNeedInputBuffer, &OnNewOutputBuffer};
    // Set the asynchronous callbacks.
    int32_t ret = OH_VideoDecoder_RegisterCallback(videoDec, cb, nullptr); // nullptr: userData is null.
    if (ret != AV_ERR_OK) {
        // Handle exceptions.
    }
    ```

    > **NOTE**
    >
    > In the callback functions, pay attention to multi-thread synchronization for operations on the data queue.
    >
    > During video playback, if the SPS of the video stream contains color information, the decoder returns the information (RangeFlag, ColorPrimary, MatrixCoefficient, and TransferCharacteristic) through the **OH_AVFormat** parameter in the **OH_AVCodecOnStreamChanged** callback.
    >
    > In surface mode of video decoding, the internal data is processed by using High Efficiency Bandwidth Compression (HEBC) by default, and the values of **widthStride** and **heightStride** cannot be obtained.
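
    As an illustration, the color information mentioned above could be read in **OnStreamChanged()** roughly as follows. This is a hedged sketch: the key names (**OH_MD_KEY_RANGE_FLAG**, **OH_MD_KEY_COLOR_PRIMARIES**, **OH_MD_KEY_MATRIX_COEFFICIENTS**, and **OH_MD_KEY_TRANSFER_CHARACTERISTICS**) are assumed here and should be verified against the [Codec Base](../../reference/apis-avcodec-kit/_codec_base.md) reference.

    ```c++
    // Sketch: reading color information from the format delivered by OH_AVCodecOnStreamChanged.
    static void ReadColorInfo(OH_AVFormat *format)
    {
        int32_t rangeFlag = 0;
        int32_t colorPrimaries = 0;
        int32_t matrixCoefficients = 0;
        int32_t transferCharacteristics = 0;
        // The key names below are assumptions; verify them in native_avcodec_base.h.
        OH_AVFormat_GetIntValue(format, OH_MD_KEY_RANGE_FLAG, &rangeFlag);
        OH_AVFormat_GetIntValue(format, OH_MD_KEY_COLOR_PRIMARIES, &colorPrimaries);
        OH_AVFormat_GetIntValue(format, OH_MD_KEY_MATRIX_COEFFICIENTS, &matrixCoefficients);
        OH_AVFormat_GetIntValue(format, OH_MD_KEY_TRANSFER_CHARACTERISTICS, &transferCharacteristics);
        // Use the values for display adaptation or post-processing as needed.
    }
    ```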

4. (Optional) Call **OH_VideoDecoder_SetDecryptionConfig** to set the decryption configuration. Call this API after the media key system information is obtained and a media key is obtained but before **Prepare()** is called. For details about how to obtain such information, see step 4 in [Media Data Demultiplexing](audio-video-demuxer.md). In surface mode, the DRM decryption capability supports both secure and non-secure video channels. For details about DRM APIs, see [DRM](../../reference/apis-drm-kit/_drm.md).

    Add the header files.

    ```c++
    #include <multimedia/drm_framework/native_mediakeysystem.h>
    #include <multimedia/drm_framework/native_mediakeysession.h>
    #include <multimedia/drm_framework/native_drm_err.h>
    #include <multimedia/drm_framework/native_drm_common.h>
    ```

    Link the dynamic library in the CMake script.

    ``` cmake
    target_link_libraries(sample PUBLIC libnative_drm.so)
    ```

    <!--RP4-->The following is the sample code:<!--RP4End-->

    ```c++
    // Create a media key system based on the media key system information. The following uses com.clearplay.drm as an example.
    MediaKeySystem *system = nullptr;
    int32_t ret = OH_MediaKeySystem_Create("com.clearplay.drm", &system);
    if (system == nullptr) {
        printf("create media key system failed");
        return;
    }

    // Create a decryption session. If a secure video channel is used, create a MediaKeySession with the content protection level of CONTENT_PROTECTION_LEVEL_HW_CRYPTO or higher.
    // To use a non-secure video channel, create a MediaKeySession with the content protection level of CONTENT_PROTECTION_LEVEL_SW_CRYPTO or higher.
    MediaKeySession *session = nullptr;
    DRM_ContentProtectionLevel contentProtectionLevel = CONTENT_PROTECTION_LEVEL_SW_CRYPTO;
    ret = OH_MediaKeySystem_CreateMediaKeySession(system, &contentProtectionLevel, &session);
    if (ret != DRM_OK) {
        // If the creation fails, refer to the DRM interface document and check logs.
        printf("create media key session failed.");
        return;
    }
    if (session == nullptr) {
        printf("media key session is nullptr.");
        return;
    }

    // Generate a media key request and set the response to the media key request.

    // Set the decryption configuration, that is, set the decryption session and secure video channel flag to the decoder.
    // If the DRM scheme supports a secure video channel, set secureVideoPath to true and create a secure decoder before using the channel.
    // That is, in step 2, call OH_VideoDecoder_CreateByName, with a decoder name followed by .secure (for example, [CodecName].secure) passed in, to create a secure decoder.
    bool secureVideoPath = false;
    ret = OH_VideoDecoder_SetDecryptionConfig(videoDec, session, secureVideoPath);
    ```

5. Call **OH_VideoDecoder_Configure()** to configure the decoder.

    For details about the configurable options, see [Video Dedicated Key-Value Pairs](../../reference/apis-avcodec-kit/_codec_base.md#media-data-key-value-pairs).

    For details about the parameter verification rules, see [OH_VideoDecoder_Configure()](../../reference/apis-avcodec-kit/_video_decoder.md#oh_videodecoder_configure).

    The parameter value ranges can be obtained through the capability query interface. For details, see [Obtaining Supported Codecs](obtain-supported-codecs.md).

    Currently, the following options must be configured for all supported formats: video frame width, video frame height, and video pixel format.

    ```c++
    OH_AVFormat *format = OH_AVFormat_Create();
    // Set the format.
    OH_AVFormat_SetIntValue(format, OH_MD_KEY_WIDTH, width); // Mandatory.
    OH_AVFormat_SetIntValue(format, OH_MD_KEY_HEIGHT, height); // Mandatory.
    OH_AVFormat_SetIntValue(format, OH_MD_KEY_PIXEL_FORMAT, pixelFormat);
    // (Optional) Configure low-latency decoding.
    // If supported by the platform, the video decoder outputs frames in the decoding sequence when OH_MD_KEY_VIDEO_ENABLE_LOW_LATENCY is enabled.
    OH_AVFormat_SetIntValue(format, OH_MD_KEY_VIDEO_ENABLE_LOW_LATENCY, 1);
    // Configure the decoder.
    int32_t ret = OH_VideoDecoder_Configure(videoDec, format);
    if (ret != AV_ERR_OK) {
        // Handle exceptions.
    }
    OH_AVFormat_Destroy(format);
    ```

6. Set the surface.

    You can obtain the NativeWindow in either of the following ways:

    6.1 If the image is directly displayed after being decoded, obtain NativeWindow from the **XComponent**.

    Add the header files.

    ```c++
    #include <native_window/external_window.h>
    ```

    Link the dynamic library in the CMake script.

    ``` cmake
    target_link_libraries(sample PUBLIC libnative_window.so)
    ```

    6.1.1 On ArkTS, call **getXComponentSurfaceId** of the xComponentController to obtain the surface ID of the XComponent. For details about the operation, see [Custom Rendering (XComponent)](../../ui/napi-xcomponent-guidelines.md#arkts-xcomponent-scenario).

    6.1.2 On the native side, call **OH_NativeWindow_CreateNativeWindowFromSurfaceId** to create a **NativeWindow** instance.

    ```c++
    OHNativeWindow* nativeWindow;
    // Create a NativeWindow instance based on the surface ID obtained in 6.1.1.
    OH_NativeWindow_CreateNativeWindowFromSurfaceId(surfaceId, &nativeWindow);
    ```

    6.2 If OpenGL post-processing is performed after decoding, obtain NativeWindow from NativeImage. For details about the operation, see [NativeImage](../../graphics/native-image-guidelines.md).

    You can also perform this step during decoding to dynamically switch the surface.

    ```c++
    // Set the surface.
    // Set the window parameters.
    int32_t ret = OH_VideoDecoder_SetSurface(videoDec, nativeWindow); // Obtain a NativeWindow instance using either of the preceding methods.
    if (ret != AV_ERR_OK) {
        // Handle exceptions.
    }
    // Configure the matching mode between the video and screen. (Scale the buffer at the original aspect ratio so that the smaller side of the buffer matches the window, and crop the excess part.)
    OH_NativeWindow_NativeWindowSetScalingModeV2(nativeWindow, OH_SCALING_MODE_SCALE_CROP_V2);
    ```

7. (Optional) Call **OH_VideoDecoder_SetParameter()** to set the surface parameters of the decoder.

    For details about the configurable options, see [Video Dedicated Key-Value Pairs](../../reference/apis-avcodec-kit/_codec_base.md#media-data-key-value-pairs).

    ```c++
    OH_AVFormat *format = OH_AVFormat_Create();
    // Configure the display rotation angle.
    OH_AVFormat_SetIntValue(format, OH_MD_KEY_ROTATION, 90);
    int32_t ret = OH_VideoDecoder_SetParameter(videoDec, format);
    OH_AVFormat_Destroy(format);
    ```

8. Call **OH_VideoDecoder_Prepare()** to prepare internal resources for the decoder.

    ```c++
    int32_t ret = OH_VideoDecoder_Prepare(videoDec);
    if (ret != AV_ERR_OK) {
        // Handle exceptions.
    }
    ```

9. Call **OH_VideoDecoder_Start()** to start the decoder.

    ```c++
    // Start the decoder.
    int32_t ret = OH_VideoDecoder_Start(videoDec);
    if (ret != AV_ERR_OK) {
        // Handle exceptions.
    }
    ```

10. (Optional) Call **OH_AVCencInfo_SetAVBuffer()** to set the Common Encryption Scheme (CENC) information.

    If the program being played is DRM encrypted and the application implements media demultiplexing instead of using the system's [demuxer](audio-video-demuxer.md), you must call **OH_AVCencInfo_SetAVBuffer()** to set the CENC information to the AVBuffer. In this way, the AVBuffer carries the data to be decrypted and the CENC information, so that the media data in the AVBuffer can be decrypted. You do not need to call this API when the application uses the system's [demuxer](audio-video-demuxer.md).

    Add the header files.

    ```c++
    #include <multimedia/player_framework/native_cencinfo.h>
    ```

    Link the dynamic library in the CMake script.

    ``` cmake
    target_link_libraries(sample PUBLIC libnative_media_avcencinfo.so)
    ```

    In the code snippet below, the following variable is used:

    - **buffer**: parameter passed by the callback function **OnNeedInputBuffer**.

    ```c++
    uint32_t keyIdLen = DRM_KEY_ID_SIZE;
    uint8_t keyId[] = {
        0xd4, 0xb2, 0x01, 0xe4, 0x61, 0xc8, 0x98, 0x96,
        0xcf, 0x05, 0x22, 0x39, 0x8d, 0x09, 0xe6, 0x28};
    uint32_t ivLen = DRM_KEY_IV_SIZE;
    uint8_t iv[] = {
        0xbf, 0x77, 0xed, 0x51, 0x81, 0xde, 0x36, 0x3e,
        0x52, 0xf7, 0x20, 0x4f, 0x72, 0x14, 0xa3, 0x95};
    uint32_t encryptedBlockCount = 0;
    uint32_t skippedBlockCount = 0;
    uint32_t firstEncryptedOffset = 0;
    uint32_t subsampleCount = 1;
    DrmSubsample subsamples[1] = { {0x10, 0x16} };
    // Create a CencInfo instance.
    OH_AVCencInfo *cencInfo = OH_AVCencInfo_Create();
    if (cencInfo == nullptr) {
        // Handle exceptions.
    }
    // Set the decryption algorithm.
    OH_AVErrCode errNo = OH_AVCencInfo_SetAlgorithm(cencInfo, DRM_ALG_CENC_AES_CTR);
    if (errNo != AV_ERR_OK) {
        // Handle exceptions.
    }
    // Set KeyId and Iv.
    errNo = OH_AVCencInfo_SetKeyIdAndIv(cencInfo, keyId, keyIdLen, iv, ivLen);
    if (errNo != AV_ERR_OK) {
        // Handle exceptions.
    }
    // Set the sample information.
    errNo = OH_AVCencInfo_SetSubsampleInfo(cencInfo, encryptedBlockCount, skippedBlockCount, firstEncryptedOffset,
        subsampleCount, subsamples);
    if (errNo != AV_ERR_OK) {
        // Handle exceptions.
    }
    // Set the mode. KeyId, Iv, and SubSamples have been set.
    errNo = OH_AVCencInfo_SetMode(cencInfo, DRM_CENC_INFO_KEY_IV_SUBSAMPLES_SET);
    if (errNo != AV_ERR_OK) {
        // Handle exceptions.
    }
    // Set CencInfo to the AVBuffer.
    errNo = OH_AVCencInfo_SetAVBuffer(cencInfo, buffer);
    if (errNo != AV_ERR_OK) {
        // Handle exceptions.
    }
    // Destroy the CencInfo instance.
    errNo = OH_AVCencInfo_Destroy(cencInfo);
    if (errNo != AV_ERR_OK) {
        // Handle exceptions.
    }
    ```

11. Call **OH_VideoDecoder_PushInputBuffer()** to push the stream to the input buffer for decoding.

    In the code snippet below, the following variables are used:

    - **buffer**: parameter passed by the callback function **OnNeedInputBuffer**. You can obtain the virtual address of the input stream by calling [OH_AVBuffer_GetAddr](../../reference/apis-avcodec-kit/_core.md#oh_avbuffer_getaddr).
    - **index**: parameter passed by the callback function **OnNeedInputBuffer**, which uniquely corresponds to the buffer.
    - **size**, **offset**, **pts**, and **frameData**: size, offset, timestamp, and frame data. For details about how to obtain such information, see step 9 in [Media Data Demultiplexing](./audio-video-demuxer.md).
    - **flags**: type of the buffer flag. For details, see [OH_AVCodecBufferFlags](../../reference/apis-avcodec-kit/_core.md#oh_avcodecbufferflags).

    ```c++
    std::shared_ptr<CodecBufferInfo> bufferInfo = inQueue.Dequeue();
    std::shared_lock<std::shared_mutex> lock(codecMutex);
    if (bufferInfo == nullptr || !bufferInfo->isValid) {
        // Handle exceptions.
    }
    // Write stream data.
    uint8_t *addr = OH_AVBuffer_GetAddr(bufferInfo->buffer);
    int32_t capacity = OH_AVBuffer_GetCapacity(bufferInfo->buffer);
    if (size > capacity) {
        // Handle exceptions.
    }
    memcpy(addr, frameData, size);
    // Configure the size, offset, and timestamp of the frame data.
    OH_AVCodecBufferAttr info;
    info.size = size;
    info.offset = offset;
    info.pts = pts;
    info.flags = flags;
    // Write the information to the buffer.
    int32_t ret = OH_AVBuffer_SetBufferAttr(bufferInfo->buffer, &info);
    if (ret != AV_ERR_OK) {
        // Handle exceptions.
    }
    // Send the data to the input buffer for decoding. index is the index of the buffer.
    ret = OH_VideoDecoder_PushInputBuffer(videoDec, bufferInfo->index);
    if (ret != AV_ERR_OK) {
        // Handle exceptions.
    }
    ```

12. Call **OH_VideoDecoder_RenderOutputBuffer()** or **OH_VideoDecoder_RenderOutputBufferAtTime()** to render the data and free the output buffer, or call **OH_VideoDecoder_FreeOutputBuffer()** to directly free the output buffer.

    In the code snippet below, the following variables are used:

    - **index**: parameter passed by the callback function **OnNewOutputBuffer**, which uniquely corresponds to the buffer.
    - **buffer**: parameter passed by the callback function **OnNewOutputBuffer**. In surface mode, you cannot obtain the virtual address of the image by calling [OH_AVBuffer_GetAddr](../../reference/apis-avcodec-kit/_core.md#oh_avbuffer_getaddr).

    ```c++
    std::shared_ptr<CodecBufferInfo> bufferInfo = outQueue.Dequeue();
    std::shared_lock<std::shared_mutex> lock(codecMutex);
    if (bufferInfo == nullptr || !bufferInfo->isValid) {
        // Handle exceptions.
    }
    // Obtain the decoded information.
    OH_AVCodecBufferAttr info;
    int32_t ret = OH_AVBuffer_GetBufferAttr(bufferInfo->buffer, &info);
    if (ret != AV_ERR_OK) {
        // Handle exceptions.
    }
    // Determine whether to render the frame, and whether to render it at a specific time, based on service requirements.
    bool isRender = true;
    bool isNeedRenderAtTime = false;
    if (isRender) {
        // Render the data and free the output buffer. index is the index of the buffer.
        if (isNeedRenderAtTime) {
            // Obtain the system absolute time and pass it as renderTimestamp to specify when the frame is displayed, based on service requirements.
            int64_t renderTimestamp =
                std::chrono::duration_cast<std::chrono::nanoseconds>(std::chrono::high_resolution_clock::now().time_since_epoch()).count();
            ret = OH_VideoDecoder_RenderOutputBufferAtTime(videoDec, bufferInfo->index, renderTimestamp);
        } else {
            ret = OH_VideoDecoder_RenderOutputBuffer(videoDec, bufferInfo->index);
        }
    } else {
        // Free the output buffer.
        ret = OH_VideoDecoder_FreeOutputBuffer(videoDec, bufferInfo->index);
    }
    if (ret != AV_ERR_OK) {
        // Handle exceptions.
    }
    ```

    > **NOTE**
    >
    > To obtain the buffer attributes, such as **pixel_format** and **stride**, call [OH_NativeWindow_NativeWindowHandleOpt](../../reference/apis-arkgraphics2d/_native_window.md#oh_nativewindow_nativewindowhandleopt).
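
    For reference, a minimal sketch of querying such attributes from the OHNativeWindow set in step 6 is shown below. The operation codes (**GET_BUFFER_GEOMETRY**, **GET_FORMAT**) and their parameter order are assumptions to be verified against the NativeWindow reference.

    ```c++
    // Sketch: querying buffer attributes from the native window (error handling omitted).
    int32_t windowHeight = 0;
    int32_t windowWidth = 0;
    int32_t windowFormat = 0;
    // Assumption: GET_BUFFER_GEOMETRY returns the height first and then the width.
    OH_NativeWindow_NativeWindowHandleOpt(nativeWindow, GET_BUFFER_GEOMETRY, &windowHeight, &windowWidth);
    // Assumption: GET_FORMAT returns the pixel format of the window buffers.
    OH_NativeWindow_NativeWindowHandleOpt(nativeWindow, GET_FORMAT, &windowFormat);
    ```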

13. (Optional) Call **OH_VideoDecoder_Flush()** to refresh the decoder.

    After **OH_VideoDecoder_Flush** is called, the decoder remains in the Running state, but the input and output data and parameter set (such as the H.264 PPS/SPS) buffered in the decoder are cleared.

    To continue decoding, you must call **OH_VideoDecoder_Start** again.

    In the code snippet below, the following variables are used:

    - **xpsData** and **xpsSize**: PPS/SPS information. For details about how to obtain such information, see [Media Data Demultiplexing](./audio-video-demuxer.md).

    ```c++
    std::unique_lock<std::shared_mutex> lock(codecMutex);
    // Refresh the decoder.
    int32_t ret = OH_VideoDecoder_Flush(videoDec);
    if (ret != AV_ERR_OK) {
        // Handle exceptions.
    }
    inQueue.Flush();
    outQueue.Flush();
    // Start decoding again.
    ret = OH_VideoDecoder_Start(videoDec);
    if (ret != AV_ERR_OK) {
        // Handle exceptions.
    }

    std::shared_ptr<CodecBufferInfo> bufferInfo = inQueue.Dequeue();
    if (bufferInfo == nullptr || !bufferInfo->isValid) {
        // Handle exceptions.
    }
    // Retransfer PPS/SPS.
    // Configure the frame PPS/SPS information.
    uint8_t *addr = OH_AVBuffer_GetAddr(bufferInfo->buffer);
    int32_t capacity = OH_AVBuffer_GetCapacity(bufferInfo->buffer);
    if (xpsSize > capacity) {
        // Handle exceptions.
    }
    memcpy(addr, xpsData, xpsSize);
    OH_AVCodecBufferAttr info;
    info.size = xpsSize;
    info.offset = 0;
    info.pts = 0;
    info.flags = AVCODEC_BUFFER_FLAGS_CODEC_DATA;
    // Write the information to the buffer.
    ret = OH_AVBuffer_SetBufferAttr(bufferInfo->buffer, &info);
    if (ret != AV_ERR_OK) {
        // Handle exceptions.
    }
    // Push the frame data to the decoder. index is the index of the buffer.
    ret = OH_VideoDecoder_PushInputBuffer(videoDec, bufferInfo->index);
    if (ret != AV_ERR_OK) {
        // Handle exceptions.
    }
    ```

    > **NOTE**
    >
    > When **OH_VideoDecoder_Start** is called again after the flush operation, the PPS/SPS must be retransferred.

14. (Optional) Call **OH_VideoDecoder_Reset()** to reset the decoder.

    After **OH_VideoDecoder_Reset** is called, the decoder returns to the Initialized state. To continue decoding, you must call **OH_VideoDecoder_Configure**, **OH_VideoDecoder_SetSurface**, and **OH_VideoDecoder_Prepare** in sequence.

    ```c++
    std::unique_lock<std::shared_mutex> lock(codecMutex);
    // Reset the decoder.
    int32_t ret = OH_VideoDecoder_Reset(videoDec);
    if (ret != AV_ERR_OK) {
        // Handle exceptions.
    }
    inQueue.Flush();
    outQueue.Flush();
    // Reconfigure the decoder.
    ret = OH_VideoDecoder_Configure(videoDec, format);
    if (ret != AV_ERR_OK) {
        // Handle exceptions.
    }
    // Reconfigure the surface in surface mode. This is not required in buffer mode.
    ret = OH_VideoDecoder_SetSurface(videoDec, nativeWindow);
    if (ret != AV_ERR_OK) {
        // Handle exceptions.
    }
    // The decoder is ready again.
    ret = OH_VideoDecoder_Prepare(videoDec);
    if (ret != AV_ERR_OK) {
        // Handle exceptions.
    }
    ```

15. (Optional) Call **OH_VideoDecoder_Stop()** to stop the decoder.

    After **OH_VideoDecoder_Stop()** is called, the decoder retains the decoding instance and releases the input and output buffers. You can directly call **OH_VideoDecoder_Start** to continue decoding. The first input buffer pushed after the restart must carry the parameter set (PPS/SPS) and start from an IDR frame.

    ```c++
    std::unique_lock<std::shared_mutex> lock(codecMutex);
    // Stop the decoder.
    int32_t ret = OH_VideoDecoder_Stop(videoDec);
    if (ret != AV_ERR_OK) {
        // Handle exceptions.
    }
    inQueue.Flush();
    outQueue.Flush();
    ```

16. Call **OH_VideoDecoder_Destroy()** to destroy the decoder instance and release resources.

    > **NOTE**
    >
    > This API cannot be called in the callback functions.
    >
    > After the call, you must set the decoder pointer to null to prevent program errors caused by wild pointers.

    ```c++
    std::unique_lock<std::shared_mutex> lock(codecMutex);
    // Call OH_VideoDecoder_Destroy to destroy the decoder.
    int32_t ret = AV_ERR_OK;
    if (videoDec != nullptr) {
        ret = OH_VideoDecoder_Destroy(videoDec);
        videoDec = nullptr;
    }
    if (ret != AV_ERR_OK) {
        // Handle exceptions.
    }
    inQueue.Flush();
    outQueue.Flush();
    ```

### Buffer Output

The following walks you through how to implement the entire video decoding process in buffer mode. In this example, an H.264 file is input and decoded into a YUV file.

Currently, the VideoDecoder module supports only data rotation in asynchronous mode.

1. Add the header files.

    ```c++
    #include <multimedia/player_framework/native_avcodec_videodecoder.h>
    #include <multimedia/player_framework/native_avcapability.h>
    #include <multimedia/player_framework/native_avcodec_base.h>
    #include <multimedia/player_framework/native_avformat.h>
    #include <multimedia/player_framework/native_avbuffer.h>
    #include <native_buffer/native_buffer.h>
    #include <fstream>
    ```

2. Create a decoder instance.

    The procedure is the same as that in surface mode and is not described here.

    ```c++
    // To create a decoder by name, call OH_AVCapability_GetName to obtain the codec names available and then call OH_VideoDecoder_CreateByName. If your application has special requirements, for example, expecting a decoder that supports a certain resolution, you can call OH_AVCodec_GetCapability to query the capability first.
    OH_AVCapability *capability = OH_AVCodec_GetCapability(OH_AVCODEC_MIMETYPE_VIDEO_AVC, false);
    const char *name = OH_AVCapability_GetName(capability);
    OH_AVCodec *videoDec = OH_VideoDecoder_CreateByName(name);
    ```

    ```c++
    // Create a decoder by MIME type. Only specific codecs recommended by the system can be created in this way.
    // If multiple codecs need to be created, create hardware decoder instances first. If the hardware resources are insufficient, create software decoder instances.
    // Create an H.264 decoder for software/hardware decoding.
    OH_AVCodec *videoDec = OH_VideoDecoder_CreateByMime(OH_AVCODEC_MIMETYPE_VIDEO_AVC);
    // Create an H.265 decoder for hardware decoding.
    OH_AVCodec *videoDec = OH_VideoDecoder_CreateByMime(OH_AVCODEC_MIMETYPE_VIDEO_HEVC);
    ```

3. Call **OH_VideoDecoder_RegisterCallback()** to register the callback functions.

    Register the **OH_AVCodecCallback** struct that defines the following callback function pointers:

    - **OH_AVCodecOnError**, a callback used to report a codec operation error. For details about the error codes, see [OH_AVCodecOnError](../../reference/apis-avcodec-kit/_codec_base.md#oh_avcodeconerror).
    - **OH_AVCodecOnStreamChanged**, a callback used to report a codec stream change, for example, a stream width or height change.
    - **OH_AVCodecOnNeedInputBuffer**, a callback used to report input data required, which means that the decoder is ready to receive data.
    - **OH_AVCodecOnNewOutputBuffer**, a callback used to report output data generated, which means that decoding is complete.

    You need to process the callback functions to ensure that the decoder runs properly.

    <!--RP2--><!--RP2End-->

    ```c++
    int32_t cropTop = 0;
    int32_t cropBottom = 0;
    int32_t cropLeft = 0;
    int32_t cropRight = 0;
    bool isFirstFrame = true;
    // Implement the OH_AVCodecOnError callback function.
    static void OnError(OH_AVCodec *codec, int32_t errorCode, void *userData)
    {
        // Process the error code in the callback.
        (void)codec;
        (void)errorCode;
        (void)userData;
    }

    // Implement the OH_AVCodecOnStreamChanged callback function.
    static void OnStreamChanged(OH_AVCodec *codec, OH_AVFormat *format, void *userData)
    {
        // Optional. Configure the data when you want to obtain the video width, height, and stride.
        // The changed video width, height, and stride can be obtained through format.
        (void)codec;
        (void)userData;
        OH_AVFormat_GetIntValue(format, OH_MD_KEY_VIDEO_PIC_WIDTH, &width);
        OH_AVFormat_GetIntValue(format, OH_MD_KEY_VIDEO_PIC_HEIGHT, &height);
        OH_AVFormat_GetIntValue(format, OH_MD_KEY_VIDEO_STRIDE, &widthStride);
        OH_AVFormat_GetIntValue(format, OH_MD_KEY_VIDEO_SLICE_HEIGHT, &heightStride);
        // (Optional) Obtain the cropped rectangle information.
        OH_AVFormat_GetIntValue(format, OH_MD_KEY_VIDEO_CROP_TOP, &cropTop);
        OH_AVFormat_GetIntValue(format, OH_MD_KEY_VIDEO_CROP_BOTTOM, &cropBottom);
        OH_AVFormat_GetIntValue(format, OH_MD_KEY_VIDEO_CROP_LEFT, &cropLeft);
        OH_AVFormat_GetIntValue(format, OH_MD_KEY_VIDEO_CROP_RIGHT, &cropRight);
    }

    // Implement the OH_AVCodecOnNeedInputBuffer callback function.
    static void OnNeedInputBuffer(OH_AVCodec *codec, uint32_t index, OH_AVBuffer *buffer, void *userData)
    {
        // The data buffer of the input frame and its index are sent to inQueue.
        (void)codec;
        (void)userData;
        inQueue.Enqueue(std::make_shared<CodecBufferInfo>(index, buffer));
    }

    // Implement the OH_AVCodecOnNewOutputBuffer callback function.
    static void OnNewOutputBuffer(OH_AVCodec *codec, uint32_t index, OH_AVBuffer *buffer, void *userData)
    {
        // Optional. Configure the data when you want to obtain the video width, height, and stride.
        // Obtain the video width, height, and stride.
        if (isFirstFrame) {
            OH_AVFormat *format = OH_VideoDecoder_GetOutputDescription(codec);
            OH_AVFormat_GetIntValue(format, OH_MD_KEY_VIDEO_PIC_WIDTH, &width);
            OH_AVFormat_GetIntValue(format, OH_MD_KEY_VIDEO_PIC_HEIGHT, &height);
            OH_AVFormat_GetIntValue(format, OH_MD_KEY_VIDEO_STRIDE, &widthStride);
            OH_AVFormat_GetIntValue(format, OH_MD_KEY_VIDEO_SLICE_HEIGHT, &heightStride);
            // (Optional) Obtain the cropped rectangle information.
            OH_AVFormat_GetIntValue(format, OH_MD_KEY_VIDEO_CROP_TOP, &cropTop);
            OH_AVFormat_GetIntValue(format, OH_MD_KEY_VIDEO_CROP_BOTTOM, &cropBottom);
            OH_AVFormat_GetIntValue(format, OH_MD_KEY_VIDEO_CROP_LEFT, &cropLeft);
            OH_AVFormat_GetIntValue(format, OH_MD_KEY_VIDEO_CROP_RIGHT, &cropRight);
            OH_AVFormat_Destroy(format);
            isFirstFrame = false;
        }
        // The data buffer of the finished frame and its index are sent to outQueue.
        (void)userData;
        outQueue.Enqueue(std::make_shared<CodecBufferInfo>(index, buffer));
    }
    // Call OH_VideoDecoder_RegisterCallback() to register the callback functions.
    OH_AVCodecCallback cb = {&OnError, &OnStreamChanged, &OnNeedInputBuffer, &OnNewOutputBuffer};
    // Set the asynchronous callbacks.
    int32_t ret = OH_VideoDecoder_RegisterCallback(videoDec, cb, nullptr); // nullptr: userData is null.
    if (ret != AV_ERR_OK) {
        // Handle exceptions.
    }
    ```

    > **NOTE**
    >
    > In the callback functions, pay attention to multi-thread synchronization for operations on the data queue.
    >

4. (Optional) Call **OH_VideoDecoder_SetDecryptionConfig** to set the decryption configuration. Call this API after the media key system information is obtained and a media key is obtained but before **Prepare()** is called. For details about how to obtain such information, see step 4 in [Media Data Demultiplexing](audio-video-demuxer.md). In buffer mode, the DRM decryption capability supports only non-secure video channels. For details about DRM APIs, see [DRM](../../reference/apis-drm-kit/_drm.md).

    Add the header files.

    ```c++
    #include <multimedia/drm_framework/native_mediakeysystem.h>
    #include <multimedia/drm_framework/native_mediakeysession.h>
    #include <multimedia/drm_framework/native_drm_err.h>
    #include <multimedia/drm_framework/native_drm_common.h>
    ```

    Link the dynamic library in the CMake script.

    ``` cmake
    target_link_libraries(sample PUBLIC libnative_drm.so)
    ```

    The following is the sample code:

    ```c++
    // Create a media key system based on the media key system information. The following uses com.clearplay.drm as an example.
    MediaKeySystem *system = nullptr;
    int32_t ret = OH_MediaKeySystem_Create("com.clearplay.drm", &system);
    if (system == nullptr) {
        printf("create media key system failed");
        return;
    }

    // Create a media key session.
    // To use a non-secure video channel, create a MediaKeySession with the content protection level of CONTENT_PROTECTION_LEVEL_SW_CRYPTO or higher.
    MediaKeySession *session = nullptr;
    DRM_ContentProtectionLevel contentProtectionLevel = CONTENT_PROTECTION_LEVEL_SW_CRYPTO;
    ret = OH_MediaKeySystem_CreateMediaKeySession(system, &contentProtectionLevel, &session);
    if (ret != DRM_OK) {
        // If the creation fails, refer to the DRM interface document and check logs.
        printf("create media key session failed.");
        return;
    }
    if (session == nullptr) {
        printf("media key session is nullptr.");
        return;
    }
    // Generate a media key request and set the response to the media key request.
    // Set the decryption configuration, that is, set the decryption session and secure video channel flag to the decoder.
    bool secureVideoPath = false;
    ret = OH_VideoDecoder_SetDecryptionConfig(videoDec, session, secureVideoPath);
    ```

5. Call **OH_VideoDecoder_Configure()** to configure the decoder.

    The procedure is the same as that in surface mode and is not described here.

    ```c++
    OH_AVFormat *format = OH_AVFormat_Create();
    // Set the format.
    OH_AVFormat_SetIntValue(format, OH_MD_KEY_WIDTH, width); // Mandatory.
    OH_AVFormat_SetIntValue(format, OH_MD_KEY_HEIGHT, height); // Mandatory.
    OH_AVFormat_SetIntValue(format, OH_MD_KEY_PIXEL_FORMAT, pixelFormat);
    // Configure the decoder.
    int32_t ret = OH_VideoDecoder_Configure(videoDec, format);
    if (ret != AV_ERR_OK) {
        // Handle exceptions.
    }
    OH_AVFormat_Destroy(format);
    ```

6. Call **OH_VideoDecoder_Prepare()** to prepare internal resources for the decoder.

    ```c++
    int32_t ret = OH_VideoDecoder_Prepare(videoDec);
    if (ret != AV_ERR_OK) {
        // Handle exceptions.
    }
    ```

7. Call **OH_VideoDecoder_Start()** to start the decoder.

    ```c++
    std::unique_ptr<std::ofstream> outputFile = std::make_unique<std::ofstream>();
    outputFile->open("/*yourpath*.yuv", std::ios::out | std::ios::binary | std::ios::ate);
    // Start the decoder.
    int32_t ret = OH_VideoDecoder_Start(videoDec);
    if (ret != AV_ERR_OK) {
        // Handle exceptions.
    }
    ```

8. (Optional) Call **OH_AVCencInfo_SetAVBuffer()** to set the CENC information.

    The procedure is the same as that in surface mode and is not described here.

    The following is the sample code:

    ```c++
    uint32_t keyIdLen = DRM_KEY_ID_SIZE;
    uint8_t keyId[] = {
        0xd4, 0xb2, 0x01, 0xe4, 0x61, 0xc8, 0x98, 0x96,
        0xcf, 0x05, 0x22, 0x39, 0x8d, 0x09, 0xe6, 0x28};
    uint32_t ivLen = DRM_KEY_IV_SIZE;
    uint8_t iv[] = {
        0xbf, 0x77, 0xed, 0x51, 0x81, 0xde, 0x36, 0x3e,
        0x52, 0xf7, 0x20, 0x4f, 0x72, 0x14, 0xa3, 0x95};
    uint32_t encryptedBlockCount = 0;
    uint32_t skippedBlockCount = 0;
    uint32_t firstEncryptedOffset = 0;
    uint32_t subsampleCount = 1;
    DrmSubsample subsamples[1] = { {0x10, 0x16} };
    // Create a CencInfo instance.
    OH_AVCencInfo *cencInfo = OH_AVCencInfo_Create();
    if (cencInfo == nullptr) {
        // Handle exceptions.
    }
    // Set the decryption algorithm.
    OH_AVErrCode errNo = OH_AVCencInfo_SetAlgorithm(cencInfo, DRM_ALG_CENC_AES_CTR);
    if (errNo != AV_ERR_OK) {
        // Handle exceptions.
    }
    // Set KeyId and Iv.
    errNo = OH_AVCencInfo_SetKeyIdAndIv(cencInfo, keyId, keyIdLen, iv, ivLen);
    if (errNo != AV_ERR_OK) {
        // Handle exceptions.
    }
    // Set the sample information.
    errNo = OH_AVCencInfo_SetSubsampleInfo(cencInfo, encryptedBlockCount, skippedBlockCount, firstEncryptedOffset,
        subsampleCount, subsamples);
    if (errNo != AV_ERR_OK) {
        // Handle exceptions.
    }
    // Set the mode. KeyId, Iv, and SubSamples have been set.
    errNo = OH_AVCencInfo_SetMode(cencInfo, DRM_CENC_INFO_KEY_IV_SUBSAMPLES_SET);
    if (errNo != AV_ERR_OK) {
        // Handle exceptions.
    }
    // Set CencInfo to the AVBuffer.
    errNo = OH_AVCencInfo_SetAVBuffer(cencInfo, buffer);
    if (errNo != AV_ERR_OK) {
        // Handle exceptions.
    }
    // Destroy the CencInfo instance.
    errNo = OH_AVCencInfo_Destroy(cencInfo);
    if (errNo != AV_ERR_OK) {
        // Handle exceptions.
    }
    ```

9. Call **OH_VideoDecoder_PushInputBuffer()** to push the stream to the input buffer for decoding.

    The procedure is the same as that in surface mode and is not described here.

    ```c++
    std::shared_ptr<CodecBufferInfo> bufferInfo = inQueue.Dequeue();
    std::shared_lock<std::shared_mutex> lock(codecMutex);
    if (bufferInfo == nullptr || !bufferInfo->isValid) {
        // Handle exceptions.
    }
    // Write stream data.
    uint8_t *addr = OH_AVBuffer_GetAddr(bufferInfo->buffer);
    int32_t capacity = OH_AVBuffer_GetCapacity(bufferInfo->buffer);
    if (size > capacity) {
        // Handle exceptions.
    }
    memcpy(addr, frameData, size);
    // Configure the size, offset, and timestamp of the frame data.
    OH_AVCodecBufferAttr info;
    info.size = size;
    info.offset = offset;
    info.pts = pts;
    info.flags = flags;
    // Write the information to the buffer.
    int32_t ret = OH_AVBuffer_SetBufferAttr(bufferInfo->buffer, &info);
    if (ret != AV_ERR_OK) {
        // Handle exceptions.
    }
    // Send the data to the input buffer for decoding. index is the index of the buffer.
    ret = OH_VideoDecoder_PushInputBuffer(videoDec, bufferInfo->index);
    if (ret != AV_ERR_OK) {
        // Handle exceptions.
    }
    ```

10. Call **OH_VideoDecoder_FreeOutputBuffer()** to release decoded frames.

    In the code snippet below, the following variables are used:

    - **index**: parameter passed by the callback function **OnNewOutputBuffer**, which uniquely corresponds to the buffer.
    - **buffer**: parameter passed by the callback function **OnNewOutputBuffer**. You can obtain the virtual address of the image by calling [OH_AVBuffer_GetAddr](../../reference/apis-avcodec-kit/_core.md#oh_avbuffer_getaddr).

    ```c++
    std::shared_ptr<CodecBufferInfo> bufferInfo = outQueue.Dequeue();
    std::shared_lock<std::shared_mutex> lock(codecMutex);
    if (bufferInfo == nullptr || !bufferInfo->isValid) {
        // Handle exceptions.
    }
    // Obtain the decoded information.
    OH_AVCodecBufferAttr info;
    int32_t ret = OH_AVBuffer_GetBufferAttr(bufferInfo->buffer, &info);
    if (ret != AV_ERR_OK) {
        // Handle exceptions.
    }
    // Write the decoded data to the output file.
    outputFile->write(reinterpret_cast<char *>(OH_AVBuffer_GetAddr(bufferInfo->buffer)), info.size);
    // Free the buffer that stores the output data. index is the index of the buffer.
    ret = OH_VideoDecoder_FreeOutputBuffer(videoDec, bufferInfo->index);
    if (ret != AV_ERR_OK) {
        // Handle exceptions.
    }
    ```

    To copy the Y, U, and V components of an NV12 or NV21 image to another buffer in sequence (taking an NV12 image as an example), you need to understand the image layout in terms of **width**, **height**, **wStride**, and **hStride**:

    - **OH_MD_KEY_VIDEO_PIC_WIDTH** corresponds to **width**.
    - **OH_MD_KEY_VIDEO_PIC_HEIGHT** corresponds to **height**.
    - **OH_MD_KEY_VIDEO_STRIDE** corresponds to **wStride**.
    - **OH_MD_KEY_VIDEO_SLICE_HEIGHT** corresponds to **hStride**.

    ![copy by line](figures/copy-by-line-decoder.png)

    Add the header files.

    ```c++
    #include <string.h>
    ```

    The following is the sample code:

    ```c++
    // Obtain the width and height of the source buffer by using the callback function OnStreamChanged or OH_VideoDecoder_GetOutputDescription.
    struct Rect
    {
        int32_t width;
        int32_t height;
    };

    struct DstRect // Width stride and height stride of the destination buffer. They are set by the caller.
    {
        int32_t wStride;
        int32_t hStride;
    };
    // Obtain the width stride and height stride of the source buffer by using the callback function OnStreamChanged or OH_VideoDecoder_GetOutputDescription.
    struct SrcRect
    {
        int32_t wStride;
        int32_t hStride;
    };

    Rect rect = {320, 240};
    DstRect dstRect = {320, 240};
    SrcRect srcRect = {320, 256};
    uint8_t* dst = new uint8_t[dstRect.hStride * dstRect.wStride * 3 / 2]; // Pointer to the target memory area.
    uint8_t* src = new uint8_t[srcRect.hStride * srcRect.wStride * 3 / 2]; // Pointer to the source memory area.
    uint8_t* dstTemp = dst;
    uint8_t* srcTemp = src;

    // Y: Copy the source data in the Y region to the target data in another region.
    for (int32_t i = 0; i < rect.height; ++i) {
        // Copy a row of data from the source to a row of the target.
        memcpy(dstTemp, srcTemp, rect.width);
        // Update the pointers to the source data and target data to copy the next row. The pointers to the source data and target data are moved downwards by one wStride each time the source data and target data are updated.
        dstTemp += dstRect.wStride;
        srcTemp += srcRect.wStride;
    }
    // Padding.
    // Update the pointers to the source data and target data. The pointers move downwards by one padding.
    dstTemp += (dstRect.hStride - rect.height) * dstRect.wStride;
    srcTemp += (srcRect.hStride - rect.height) * srcRect.wStride;
    rect.height >>= 1;
    // UV: Copy the source data in the UV region to the target data in another region.
    for (int32_t i = 0; i < rect.height; ++i) {
        memcpy(dstTemp, srcTemp, rect.width);
        dstTemp += dstRect.wStride;
        srcTemp += srcRect.wStride;
    }

    delete[] dst;
    dst = nullptr;
    delete[] src;
    src = nullptr;
    ```

    During hardware decoding, the AVBuffer received in the output callback contains image data that has been aligned in width and height. When processing the buffer data (before releasing it), you generally need to take the image width, height, stride, and pixel format into account to ensure that the decoded data is processed correctly. For details, see step 3 in [Buffer Output](#buffer-output).

The subsequent processes (including refreshing, resetting, stopping, and destroying the decoder) are basically the same as those in surface mode. For details, see steps 13-16 in [Surface Output](#surface-output).

<!--RP5-->
<!--RP5End-->