# Video Decoding

You can call the native APIs provided by the VideoDecoder module to decode video, that is, to decode media data into a YUV file or render it.

<!--RP3--><!--RP3End-->

Currently, the following decoding capabilities are supported:

| Video Hardware Decoding Type      | Video Software Decoding Type  |
| --------------------- | ---------------- |
| AVC (H.264) and HEVC (H.265)|AVC (H.264) and HEVC (H.265)|

Video software decoding and hardware decoding differ in the codecs they support. When a decoder is created based on the MIME type, software decoding supports only H.264 (OH_AVCODEC_MIMETYPE_VIDEO_AVC), whereas hardware decoding supports both H.264 (OH_AVCODEC_MIMETYPE_VIDEO_AVC) and H.265 (OH_AVCODEC_MIMETYPE_VIDEO_HEVC).

You can perform a [capability query](obtain-supported-codecs.md) to obtain the decoding capability range.
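
For example, the following minimal sketch uses the capability APIs shown in step 3 of the development sections below to check whether a hardware H.265 (HEVC) decoder is available before choosing a decoding path. The check itself is illustrative and not mandatory.

```c++
// A minimal sketch: check whether a hardware HEVC decoder is available.
OH_AVCapability *capability =
    OH_AVCodec_GetCapabilityByCategory(OH_AVCODEC_MIMETYPE_VIDEO_HEVC, false, HARDWARE);
if (capability != nullptr) {
    // A hardware H.265 decoder exists; its name can be passed to OH_VideoDecoder_CreateByName.
    const char *name = OH_AVCapability_GetName(capability);
    (void)name;
} else {
    // Fall back to H.264 or handle the absence of hardware HEVC decoding.
}
```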

<!--RP1--><!--RP1End-->

Through the VideoDecoder module, your application can implement the following key capabilities.

|          Capability                      |             How to Configure                                                                    |
| --------------------------------------- | ---------------------------------------------------------------------------------- |
| Variable resolution        | The decoder supports the change of the input stream resolution. After the resolution is changed, the callback function **OnStreamChanged()** set by **OH_VideoDecoder_RegisterCallback** is triggered. For details, see step 4 in surface mode or step 3 in buffer mode. |
| Dynamic surface switching | Call **OH_VideoDecoder_SetSurface** to configure this capability. It is supported only in surface mode. For details, see step 7 in surface mode.   |
| Low-latency decoding | Call **OH_VideoDecoder_Configure** to configure this capability. For details, see step 6 in surface mode or step 5 in buffer mode.     |


## Restrictions
- The buffer mode does not support 10-bit image data.
- After **flush()**, **reset()**, or **stop()** is called, the PPS/SPS must be transferred again in the **start()** call. For an example, see step 14 in [Surface Output](#surface-output).
- Due to limited hardware decoder resources, you must call **OH_VideoDecoder_Destroy** to destroy every decoder instance when it is no longer needed.
- Video decoding accepts input streams only in AnnexB format. Multiple slices are supported, but all slices of the same frame must be sent to the decoder in a single input.
- When **flush()**, **reset()**, or **stop()** is called, do not continue to operate the OH_AVBuffer obtained through the previous callback function.
- The DRM decryption capability supports both non-secure and secure video channels in [surface mode](#surface-output), but only non-secure video channels in [buffer mode](#buffer-output).
- In buffer mode, after obtaining the pointer to an OH_AVBuffer object through the callback function **OH_AVCodecOnNewOutputBuffer**, call **OH_VideoDecoder_FreeOutputBuffer** to notify the system that the buffer has been fully utilized. In this way, the system can write the subsequently decoded data to the corresponding location. If the OH_NativeBuffer object is obtained through **OH_AVBuffer_GetNativeBuffer** and its lifecycle extends beyond that of the OH_AVBuffer pointer object, you must duplicate the data (a sketch is provided below) and manage the lifecycle of the newly generated OH_NativeBuffer object to ensure that it is correctly used and released.
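
A minimal sketch of such a data copy in buffer mode is shown below. It assumes the headers added in the development steps plus `<vector>` and `<cstring>`, and it runs inside **OH_AVCodecOnNewOutputBuffer** before **OH_VideoDecoder_FreeOutputBuffer** is called; the std::vector container is only an illustration.

```c++
// Copy the decoded bytes out of the OH_AVBuffer so that they remain valid after the buffer
// is returned to the decoder with OH_VideoDecoder_FreeOutputBuffer.
OH_AVCodecBufferAttr attr;
std::vector<uint8_t> copy;
if (OH_AVBuffer_GetBufferAttr(buffer, &attr) == AV_ERR_OK && attr.size > 0) {
    copy.resize(attr.size);
    memcpy(copy.data(), OH_AVBuffer_GetAddr(buffer), attr.size);
}
// The copy can now be used after the buffer (and any OH_NativeBuffer obtained from it) is released.
```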

## Surface Output and Buffer Output

- Surface output and buffer output differ in data output modes.
- They are applicable to different scenarios.
  - Surface output indicates that the OHNativeWindow is used to transfer output data. It supports connection with other modules, such as the **XComponent**.
  - Buffer output indicates that decoded data is output in shared memory mode.

- The two also differ slightly in the API calling modes:
  - In surface mode, the caller can choose to call **OH_VideoDecoder_FreeOutputBuffer** to free the output buffer (without rendering the data). In buffer mode, the caller must call **OH_VideoDecoder_FreeOutputBuffer** to free the output buffer.
  - In surface mode, the caller must call **OH_VideoDecoder_SetSurface** to set an OHNativeWindow before the decoder is ready and call **OH_VideoDecoder_RenderOutputBuffer** to render the decoded data after the decoder is started.
  - In buffer mode, an application can obtain the shared memory address and data from the output buffer. In surface mode, an application cannot obtain the data from the output buffer.

For details about the development procedure, see [Surface Output](#surface-output) and [Buffer Output](#buffer-output).

## State Machine Interaction

The following figure shows the interaction between states. A minimal call sequence that walks through these states is sketched after the list below.

![Invoking relationship of state](figures/state-invocation.png)


1. A decoder enters the Initialized state in either of the following ways:
   - When a decoder instance is initially created, the decoder enters the Initialized state.
   - When **OH_VideoDecoder_Reset** is called in any state, the decoder returns to the Initialized state.

2. When the decoder is in the Initialized state, you can call **OH_VideoDecoder_Configure** to configure the decoder. After the configuration, the decoder enters the Configured state.
3. When the decoder is in the Configured state, you can call **OH_VideoDecoder_Prepare** to switch it to the Prepared state.
4. When the decoder is in the Prepared state, you can call **OH_VideoDecoder_Start** to switch it to the Executing state.
   - When the decoder is in the Executing state, you can call **OH_VideoDecoder_Stop** to switch it back to the Prepared state.

5. In rare cases, the decoder may encounter an error and enter the Error state. If this happens, an invalid value can be returned or an exception can be thrown through a queue operation.
   - When the decoder is in the Error state, you can either call **OH_VideoDecoder_Reset** to switch it to the Initialized state or call **OH_VideoDecoder_Destroy** to switch it to the Released state.

6. The Executing state has three substates: Flushed, Running, and End-of-Stream.
   - After **OH_VideoDecoder_Start** is called, the decoder enters the Running substate immediately.
   - When the decoder is in the Executing state, you can call **OH_VideoDecoder_Flush** to switch it to the Flushed substate.
   - After all data to be processed is transferred to the decoder, the **AVCODEC_BUFFER_FLAGS_EOS** flag is added to the last input buffer in the input buffer queue. Once this flag is detected, the decoder transitions to the End-of-Stream substate. In this state, the decoder does not accept new input, but continues to generate output until it reaches the tail frame.

7. When the decoder is no longer needed, you must call **OH_VideoDecoder_Destroy** to destroy the decoder instance. Then the decoder enters the Released state.
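
The sketch below summarizes the call sequence that drives these transitions; it is only a state walk-through, not a working decode loop, and error handling is omitted. It assumes that **videoDec**, **format**, and **window** have been created as shown in the development steps that follow.

```c++
OH_VideoDecoder_Configure(videoDec, format);  // Initialized -> Configured
OH_VideoDecoder_SetSurface(videoDec, window); // Surface mode only
OH_VideoDecoder_Prepare(videoDec);            // Configured -> Prepared
OH_VideoDecoder_Start(videoDec);              // Prepared -> Executing (Running substate)
OH_VideoDecoder_Flush(videoDec);              // Executing -> Flushed substate
OH_VideoDecoder_Start(videoDec);              // Flushed -> Running
OH_VideoDecoder_Stop(videoDec);               // Executing -> Prepared
OH_VideoDecoder_Reset(videoDec);              // Any state -> Initialized
OH_VideoDecoder_Destroy(videoDec);            // Any state -> Released
videoDec = nullptr;
```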

## How to Develop

Read [VideoDecoder](../../reference/apis-avcodec-kit/_video_decoder.md) for the API reference.

The figure below shows the call relationship of video decoding.

- The dotted line indicates an optional operation.

- The solid line indicates a mandatory operation.

![Call relationship of video decoding](figures/video-decode.png)

### Linking the Dynamic Link Libraries in the CMake Script

``` cmake
target_link_libraries(sample PUBLIC libnative_media_codecbase.so)
target_link_libraries(sample PUBLIC libnative_media_core.so)
target_link_libraries(sample PUBLIC libnative_media_vdec.so)
```
> **NOTE**
>
> The word 'sample' in the preceding code snippet is only an example. Use the actual project directory name.
>

### Surface Output

The following walks you through how to implement the entire video decoding process in surface mode. In this example, an H.264 stream file is input, decoded, and rendered.

Currently, the VideoDecoder module supports only data rotation in asynchronous mode.

1. Add the header files.

    ```c++
    #include <multimedia/player_framework/native_avcodec_videodecoder.h>
    #include <multimedia/player_framework/native_avcapability.h>
    #include <multimedia/player_framework/native_avcodec_base.h>
    #include <multimedia/player_framework/native_avformat.h>
    #include <multimedia/player_framework/native_avbuffer.h>
    #include <fstream>
    ```
2. Configure global variables.

    ```c++
    // (Mandatory) Configure the video frame width.
    int32_t width = 320;
    // (Mandatory) Configure the video frame height.
    int32_t height = 240;
    // Configure the video pixel format.
    constexpr OH_AVPixelFormat DEFAULT_PIXELFORMAT = AV_PIXEL_FORMAT_NV12;
    int32_t widthStride = 0;
    int32_t heightStride = 0;
    ```

3. Create a decoder instance.

    You can create a decoder by name or MIME type. In the code snippet below, the following variables are used:

    - **videoDec**: pointer to the video decoder instance.
    - **capability**: pointer to the decoder's capability.
    - **OH_AVCODEC_MIMETYPE_VIDEO_AVC**: AVC video codec.

    ```c++
    // To create a decoder by name, call OH_AVCapability_GetName to obtain the codec names available and then call OH_VideoDecoder_CreateByName. If your application has special requirements, for example, expecting a decoder that supports a certain resolution, you can call OH_AVCodec_GetCapability to query the capability first.
    OH_AVCapability *capability = OH_AVCodec_GetCapability(OH_AVCODEC_MIMETYPE_VIDEO_AVC, false);
    // Alternatively, query the capability of hardware decoders only.
    // OH_AVCapability *capability = OH_AVCodec_GetCapabilityByCategory(OH_AVCODEC_MIMETYPE_VIDEO_AVC, false, HARDWARE);
    const char *name = OH_AVCapability_GetName(capability);
    OH_AVCodec *videoDec = OH_VideoDecoder_CreateByName(name);
    ```

    ```c++
    // Create a decoder by MIME type. Only specific codecs recommended by the system can be created in this way.
    // If multiple codecs need to be created, create hardware decoder instances first. If the hardware resources are insufficient, create software decoder instances.
    // Create an H.264 decoder for software/hardware decoding.
    OH_AVCodec *videoDec = OH_VideoDecoder_CreateByMime(OH_AVCODEC_MIMETYPE_VIDEO_AVC);
    // Alternatively, create an H.265 decoder for hardware decoding.
    // OH_AVCodec *videoDec = OH_VideoDecoder_CreateByMime(OH_AVCODEC_MIMETYPE_VIDEO_HEVC);
    ```

4. Call **OH_VideoDecoder_RegisterCallback()** to register the callback functions.

    Register the **OH_AVCodecCallback** struct that defines the following callback function pointers:

    - **OH_AVCodecOnError**, a callback used to report a codec operation error. For details about the error codes, see [OH_AVCodecOnError](../../reference/apis-avcodec-kit/_codec_base.md#oh_avcodeconerror).
    - **OH_AVCodecOnStreamChanged**, a callback used to report a codec stream change, for example, stream width or height change.
    - **OH_AVCodecOnNeedInputBuffer**, a callback used to report input data required, which means that the decoder is ready for receiving data.
    - **OH_AVCodecOnNewOutputBuffer**, a callback used to report output data generated, which means that decoding is complete. (Note: The **buffer** parameter in surface mode is null.)

    You need to process the callback functions to ensure that the decoder runs properly.

    <!--RP2--><!--RP2End-->

    ```c++
    // Implement the OH_AVCodecOnError callback function.
    static void OnError(OH_AVCodec *codec, int32_t errorCode, void *userData)
    {
        // Process the error code in the callback.
        (void)codec;
        (void)errorCode;
        (void)userData;
    }

    // Implement the OH_AVCodecOnStreamChanged callback function.
    static void OnStreamChanged(OH_AVCodec *codec, OH_AVFormat *format, void *userData)
    {
        // The changed video width, height, and stride can be obtained through format.
        (void)codec;
        (void)userData;
        OH_AVFormat_GetIntValue(format, OH_MD_KEY_VIDEO_PIC_WIDTH, &width);
        OH_AVFormat_GetIntValue(format, OH_MD_KEY_VIDEO_PIC_HEIGHT, &height);
        OH_AVFormat_GetIntValue(format, OH_MD_KEY_VIDEO_STRIDE, &widthStride);
        OH_AVFormat_GetIntValue(format, OH_MD_KEY_VIDEO_SLICE_HEIGHT, &heightStride);
    }

    // Implement the OH_AVCodecOnNeedInputBuffer callback function.
    static void OnNeedInputBuffer(OH_AVCodec *codec, uint32_t index, OH_AVBuffer *buffer, void *userData)
    {
        // The index of the input frame buffer is sent to InIndexQueue.
        // The input frame data (specified by buffer) is sent to InBufferQueue.
        // Process the data.
        // Write the stream to decode.
    }

    // Implement the OH_AVCodecOnNewOutputBuffer callback function.
    static void OnNewOutputBuffer(OH_AVCodec *codec, uint32_t index, OH_AVBuffer *buffer, void *userData)
    {
        // The index of the output frame buffer is sent to outIndexQueue.
        // The output frame data (specified by buffer) is sent to outBufferQueue.
        // Process the data.
        // Display and release decoded frames.
    }
    // Call OH_VideoDecoder_RegisterCallback() to register the callback functions.
    OH_AVCodecCallback cb = {&OnError, &OnStreamChanged, &OnNeedInputBuffer, &OnNewOutputBuffer};
    // Set the asynchronous callbacks.
    int32_t ret = OH_VideoDecoder_RegisterCallback(videoDec, cb, NULL); // NULL: userData is null.
    if (ret != AV_ERR_OK) {
        // Exception handling.
    }
    ```
    > **NOTE**
    >
    > In the callback functions, pay attention to multi-thread synchronization for operations on the data queue.
    >

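    A minimal sketch of one way to synchronize the buffer index queues between the callback threads and the decoding thread is shown below. The class and member names are illustrative and are not part of the VideoDecoder API; any equivalent synchronization mechanism can be used.

    ```c++
    #include <condition_variable>
    #include <mutex>
    #include <queue>

    // Thread-safe queue for buffer indexes: the callbacks (producers) push indexes,
    // and the decoding loop (consumer) pops them.
    class IndexQueue {
    public:
        void Push(uint32_t index)
        {
            std::lock_guard<std::mutex> lock(mutex_);
            indexes_.push(index);
            cond_.notify_one();
        }

        uint32_t Pop()
        {
            std::unique_lock<std::mutex> lock(mutex_);
            cond_.wait(lock, [this] { return !indexes_.empty(); });
            uint32_t index = indexes_.front();
            indexes_.pop();
            return index;
        }

    private:
        std::mutex mutex_;
        std::condition_variable cond_;
        std::queue<uint32_t> indexes_;
    };
    ```
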
5. (Optional) Call **OH_VideoDecoder_SetDecryptionConfig** to set the decryption configuration. Call this API after the media key system information is obtained and a media key is obtained but before **Prepare()** is called. For details about how to obtain such information, see step 3 in [Audio and Video Demuxing](audio-video-demuxer.md). In surface mode, the DRM decryption capability supports both secure and non-secure video channels. For details about DRM APIs, see [DRM](../../reference/apis-drm-kit/_drm.md).

    Add the header files.

    ```c++
    #include <multimedia/drm_framework/native_mediakeysystem.h>
    #include <multimedia/drm_framework/native_mediakeysession.h>
    #include <multimedia/drm_framework/native_drm_err.h>
    #include <multimedia/drm_framework/native_drm_common.h>
    ```
    Link the dynamic library in the CMake script.

    ``` cmake
    target_link_libraries(sample PUBLIC libnative_drm.so)
    ```

    <!--RP4-->The following is the sample code:<!--RP4End-->

    ```c++
    // Create a media key system based on the media key system information. The following uses com.clearplay.drm as an example.
    MediaKeySystem *system = nullptr;
    int32_t ret = OH_MediaKeySystem_Create("com.clearplay.drm", &system);
    if (system == nullptr) {
        printf("create media key system failed");
        return;
    }

    // Create a decryption session. If a secure video channel is used, create a MediaKeySession with the content protection level of CONTENT_PROTECTION_LEVEL_HW_CRYPTO or higher.
    // To use a non-secure video channel, create a MediaKeySession with the content protection level of CONTENT_PROTECTION_LEVEL_SW_CRYPTO or higher.
    MediaKeySession *session = nullptr;
    DRM_ContentProtectionLevel contentProtectionLevel = CONTENT_PROTECTION_LEVEL_SW_CRYPTO;
    ret = OH_MediaKeySystem_CreateMediaKeySession(system, &contentProtectionLevel, &session);
    if (ret != DRM_OK) {
        // If the creation fails, check the DRM interface document and logs.
        printf("create media key session failed.");
        return;
    }
    if (session == nullptr) {
        printf("media key session is nullptr.");
        return;
    }

    // Generate a media key request and set the response to the media key request.

    // Set the decryption configuration, that is, set the decryption session and secure video channel flag to the decoder.
    // If the DRM scheme supports a secure video channel, set secureVideoPath to true and create a secure decoder before using the channel.
    // That is, in step 3, call OH_VideoDecoder_CreateByName, with a decoder name followed by .secure (for example, [CodecName].secure) passed in, to create a secure decoder.
    bool secureVideoPath = false;
    ret = OH_VideoDecoder_SetDecryptionConfig(videoDec, session, secureVideoPath);
    ```

6. Call **OH_VideoDecoder_Configure()** to configure the decoder.

    For details about the configurable options, see [Video Dedicated Key-Value Pairs](../../reference/apis-avcodec-kit/_codec_base.md#media-data-key-value-pairs).

    For details about the parameter verification rules, see [OH_VideoDecoder_Configure()](../../reference/apis-avcodec-kit/_video_decoder.md#oh_videodecoder_configure).

    The parameter value ranges can be obtained through the capability query interface. For details, see [Obtaining Supported Codecs](obtain-supported-codecs.md).

    Currently, the following options must be configured for all supported formats: video frame width, video frame height, and video pixel format. In the code snippet below, the following variables are used:

    - **width**: 320 pixels (the video frame width configured in step 2)
    - **height**: 240 pixels (the video frame height configured in step 2)
    - **DEFAULT_PIXELFORMAT**: **AV_PIXEL_FORMAT_NV12** (the pixel format of the YUV file is NV12)

    ```c++
    OH_AVFormat *format = OH_AVFormat_Create();
    // Set the format.
    OH_AVFormat_SetIntValue(format, OH_MD_KEY_WIDTH, width); // Mandatory
    OH_AVFormat_SetIntValue(format, OH_MD_KEY_HEIGHT, height); // Mandatory
    OH_AVFormat_SetIntValue(format, OH_MD_KEY_PIXEL_FORMAT, DEFAULT_PIXELFORMAT);
    // (Optional) Configure low-latency decoding.
    OH_AVFormat_SetIntValue(format, OH_MD_KEY_VIDEO_ENABLE_LOW_LATENCY, 1);
    // Configure the decoder.
    int32_t ret = OH_VideoDecoder_Configure(videoDec, format);
    if (ret != AV_ERR_OK) {
        // Exception handling.
    }
    OH_AVFormat_Destroy(format);
    ```

7. Set the surface. The application obtains the native window from the **XComponent**. For details about the process, see [XComponent](../../reference/apis-arkui/arkui-ts/ts-basic-components-xcomponent.md).

    You can also perform this step during decoding, that is, dynamically switch the surface.

    ```c++
    // Set the window parameters.
    int32_t ret = OH_VideoDecoder_SetSurface(videoDec, window); // Obtain the window from the XComponent.
    if (ret != AV_ERR_OK) {
        // Exception handling.
    }
    ```

8. (Optional) Call **OH_VideoDecoder_SetParameter()** to set the surface parameters of the decoder.

    For details about the configurable options, see [Video Dedicated Key-Value Pairs](../../reference/apis-avcodec-kit/_codec_base.md#media-data-key-value-pairs).

    ```c++
    OH_AVFormat *format = OH_AVFormat_Create();
    // Configure the display rotation angle.
    OH_AVFormat_SetIntValue(format, OH_MD_KEY_ROTATION, 90);
    // Configure the matching mode (scaling or cropping) between the video and the screen.
    OH_AVFormat_SetIntValue(format, OH_MD_KEY_SCALING_MODE, SCALING_MODE_SCALE_CROP);
    int32_t ret = OH_VideoDecoder_SetParameter(videoDec, format);
    OH_AVFormat_Destroy(format);
    ```

9. Call **OH_VideoDecoder_Prepare()** to prepare internal resources for the decoder.

    ```c++
    int32_t ret = OH_VideoDecoder_Prepare(videoDec);
    if (ret != AV_ERR_OK) {
        // Exception handling.
    }
    ```

10. Call **OH_VideoDecoder_Start()** to start the decoder.

    ```c++
    std::string_view inputFilePath = "/*yourpath*.h264";
    std::unique_ptr<std::ifstream> inputFile = std::make_unique<std::ifstream>();
    inputFile->open(inputFilePath.data(), std::ios::in | std::ios::binary);
    // Start the decoder.
    int32_t ret = OH_VideoDecoder_Start(videoDec);
    if (ret != AV_ERR_OK) {
        // Exception handling.
    }
    ```

11. (Optional) Call **OH_AVCencInfo_SetAVBuffer()** to set the Common Encryption Scheme (CENC) information.

    If the program to play is DRM encrypted and the application implements media demuxing instead of using the system's [demuxer](audio-video-demuxer.md), you must call **OH_AVCencInfo_SetAVBuffer()** to set the CENC information to the AVBuffer. In this way, the AVBuffer carries the data to be decrypted and CENC information, so that the media data in the AVBuffer can be decrypted. You do not need to call this API when the application uses the system's [demuxer](audio-video-demuxer.md).

    Add the header files.

    ```c++
    #include <multimedia/player_framework/native_cencinfo.h>
    ```
    Link the dynamic library in the CMake script.

    ``` cmake
    target_link_libraries(sample PUBLIC libnative_media_avcencinfo.so)
    ```

    In the code snippet below, the following variable is used:
    - **buffer**: parameter passed by the callback function **OnNeedInputBuffer**. You can obtain the virtual address of the image by calling **OH_AVBuffer_GetAddr**.
    ```c++
    uint32_t keyIdLen = DRM_KEY_ID_SIZE;
    uint8_t keyId[] = {
        0xd4, 0xb2, 0x01, 0xe4, 0x61, 0xc8, 0x98, 0x96,
        0xcf, 0x05, 0x22, 0x39, 0x8d, 0x09, 0xe6, 0x28};
    uint32_t ivLen = DRM_KEY_IV_SIZE;
    uint8_t iv[] = {
        0xbf, 0x77, 0xed, 0x51, 0x81, 0xde, 0x36, 0x3e,
        0x52, 0xf7, 0x20, 0x4f, 0x72, 0x14, 0xa3, 0x95};
    uint32_t encryptedBlockCount = 0;
    uint32_t skippedBlockCount = 0;
    uint32_t firstEncryptedOffset = 0;
    uint32_t subsampleCount = 1;
    DrmSubsample subsamples[1] = { {0x10, 0x16} };
    // Create a CencInfo instance.
    OH_AVCencInfo *cencInfo = OH_AVCencInfo_Create();
    if (cencInfo == nullptr) {
        // Exception handling.
    }
    // Set the decryption algorithm.
    OH_AVErrCode errNo = OH_AVCencInfo_SetAlgorithm(cencInfo, DRM_ALG_CENC_AES_CTR);
    if (errNo != AV_ERR_OK) {
        // Exception handling.
    }
    // Set KeyId and Iv.
    errNo = OH_AVCencInfo_SetKeyIdAndIv(cencInfo, keyId, keyIdLen, iv, ivLen);
    if (errNo != AV_ERR_OK) {
        // Exception handling.
    }
    // Set the sample information.
    errNo = OH_AVCencInfo_SetSubsampleInfo(cencInfo, encryptedBlockCount, skippedBlockCount, firstEncryptedOffset,
        subsampleCount, subsamples);
    if (errNo != AV_ERR_OK) {
        // Exception handling.
    }
    // Set the mode. KeyId, Iv, and SubSamples have been set.
    errNo = OH_AVCencInfo_SetMode(cencInfo, DRM_CENC_INFO_KEY_IV_SUBSAMPLES_SET);
    if (errNo != AV_ERR_OK) {
        // Exception handling.
    }
    // Set CencInfo to the AVBuffer.
    errNo = OH_AVCencInfo_SetAVBuffer(cencInfo, buffer);
    if (errNo != AV_ERR_OK) {
        // Exception handling.
    }
    // Destroy the CencInfo instance.
    errNo = OH_AVCencInfo_Destroy(cencInfo);
    if (errNo != AV_ERR_OK) {
        // Exception handling.
    }
    ```

12. Call **OH_VideoDecoder_PushInputBuffer()** to push the stream to the input buffer for decoding.

    In the code snippet below, the following variables are used:

    - **buffer**: parameter passed by the callback function **OnNeedInputBuffer**. In surface mode, you cannot obtain the virtual address of the image by calling **OH_AVBuffer_GetAddr**.
    - **index**: parameter passed by the callback function **OnNeedInputBuffer**, which uniquely corresponds to the buffer.
    - **size**, **offset**, and **pts**: size, offset, and timestamp. For details about how to obtain the information, see [Audio and Video Demuxing](./audio-video-demuxer.md).
    - **flags**: type of the buffer flag. For details, see [OH_AVCodecBufferFlags](../../reference/apis-avcodec-kit/_core.md#oh_avcodecbufferflags).

    ```c++
    // Configure the size, offset, and timestamp of the frame data.
    OH_AVCodecBufferAttr info;
    info.size = size;
    info.offset = offset;
    info.pts = pts;
    info.flags = flags;
    // Write the information to the buffer.
    int32_t ret = OH_AVBuffer_SetBufferAttr(buffer, &info);
    if (ret != AV_ERR_OK) {
        // Exception handling.
    }
    // Send the data to the input buffer for decoding. index is the index of the buffer.
    ret = OH_VideoDecoder_PushInputBuffer(videoDec, index);
    if (ret != AV_ERR_OK) {
        // Exception handling.
    }
    ```
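
    After the last frame of the stream has been pushed, the decoder can be moved to the End-of-Stream substate described in the state machine section by pushing an empty buffer that carries the **AVCODEC_BUFFER_FLAGS_EOS** flag. The sketch below reuses **buffer** and **index** from **OnNeedInputBuffer**.

    ```c++
    // Notify the decoder that no more input will follow.
    OH_AVCodecBufferAttr eosInfo;
    eosInfo.size = 0;
    eosInfo.offset = 0;
    eosInfo.pts = 0;
    eosInfo.flags = AVCODEC_BUFFER_FLAGS_EOS;
    int32_t ret = OH_AVBuffer_SetBufferAttr(buffer, &eosInfo);
    if (ret != AV_ERR_OK) {
        // Exception handling.
    }
    ret = OH_VideoDecoder_PushInputBuffer(videoDec, index);
    if (ret != AV_ERR_OK) {
        // Exception handling.
    }
    ```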

13. Call **OH_VideoDecoder_RenderOutputBuffer()** or **OH_VideoDecoder_RenderOutputBufferAtTime()** to render the data and free the output buffer, or call **OH_VideoDecoder_FreeOutputBuffer()** to directly free the output buffer.

    In the code snippet below, the following variables are used:

    - **index**: parameter passed by the callback function **OnNewOutputBuffer**, which uniquely corresponds to the buffer.
    - **buffer**: parameter passed by the callback function **OnNewOutputBuffer**. In surface mode, you cannot obtain the virtual address of the image by calling **OH_AVBuffer_GetAddr**.

    Add the header files.

    ```c++
    #include <chrono>
    ```

    ```c++
    // Obtain the decoded information.
    OH_AVCodecBufferAttr info;
    int32_t ret = OH_AVBuffer_GetBufferAttr(buffer, &info);
    if (ret != AV_ERR_OK) {
        // Exception handling.
    }
    // The values are determined by the caller.
    bool isRender = false;
    bool isNeedRenderAtTime = false;
    if (isRender) {
        // Render the data and free the output buffer. index is the index of the buffer.
        if (isNeedRenderAtTime) {
            // Obtain the system absolute time and pass it as renderTimestamp to schedule the display time based on service requirements.
            int64_t renderTimestamp =
                std::chrono::duration_cast<std::chrono::nanoseconds>(
                    std::chrono::high_resolution_clock::now().time_since_epoch()).count();
            ret = OH_VideoDecoder_RenderOutputBufferAtTime(videoDec, index, renderTimestamp);
        } else {
            ret = OH_VideoDecoder_RenderOutputBuffer(videoDec, index);
        }
    } else {
        // Free the output buffer.
        ret = OH_VideoDecoder_FreeOutputBuffer(videoDec, index);
    }
    if (ret != AV_ERR_OK) {
        // Exception handling.
    }
    ```

14. (Optional) Call **OH_VideoDecoder_Flush()** to refresh the decoder.

    After **OH_VideoDecoder_Flush** is called, the decoder remains in the Executing state (Flushed substate), but the input and output data and parameter set (such as the H.264 PPS/SPS) buffered in the decoder are cleared.

    To continue decoding, you must call **OH_VideoDecoder_Start** again.

    ```c++
    // Refresh the decoder.
    int32_t ret = OH_VideoDecoder_Flush(videoDec);
    if (ret != AV_ERR_OK) {
        // Exception handling.
    }
    // Start decoding again.
    ret = OH_VideoDecoder_Start(videoDec);
    if (ret != AV_ERR_OK) {
        // Exception handling.
    }
    // Retransfer the PPS/SPS.
    // Configure the frame PPS/SPS information.
    OH_AVCodecBufferAttr info;
    info.flags = AVCODEC_BUFFER_FLAGS_CODEC_DATA;
    // Write the information to the buffer.
    ret = OH_AVBuffer_SetBufferAttr(buffer, &info);
    if (ret != AV_ERR_OK) {
        // Exception handling.
    }
    // Push the frame data to the decoder. index is the index of the corresponding queue.
    ret = OH_VideoDecoder_PushInputBuffer(videoDec, index);
    if (ret != AV_ERR_OK) {
        // Exception handling.
    }
    ```
    > **NOTE**
    >
    > When **OH_VideoDecoder_Start** is called again after the flush operation, the PPS/SPS must be retransferred.


15. (Optional) Call **OH_VideoDecoder_Reset()** to reset the decoder.

    After **OH_VideoDecoder_Reset** is called, the decoder returns to the Initialized state. To continue decoding, you must call **OH_VideoDecoder_Configure**, **OH_VideoDecoder_Prepare**, and **OH_VideoDecoder_SetSurface** in sequence.

    ```c++
    // Reset the decoder.
    int32_t ret = OH_VideoDecoder_Reset(videoDec);
    if (ret != AV_ERR_OK) {
        // Exception handling.
    }
    // Reconfigure the decoder.
    ret = OH_VideoDecoder_Configure(videoDec, format);
    if (ret != AV_ERR_OK) {
        // Exception handling.
    }
    // The decoder is ready again.
    ret = OH_VideoDecoder_Prepare(videoDec);
    if (ret != AV_ERR_OK) {
        // Exception handling.
    }
    // Reconfigure the surface in surface mode. This is not required in buffer mode.
    ret = OH_VideoDecoder_SetSurface(videoDec, window);
    if (ret != AV_ERR_OK) {
        // Exception handling.
    }
    ```

16. (Optional) Call **OH_VideoDecoder_Stop()** to stop the decoder.

    After **OH_VideoDecoder_Stop()** is called, the decoder retains the decoding instance and releases the input and output buffers. You can directly call **OH_VideoDecoder_Start** to continue decoding. The first input buffer after the restart must carry the parameter set (PPS/SPS), and decoding must start from an IDR frame, as sketched after the snippet below.

    ```c++
    // Stop the decoder.
    int32_t ret = OH_VideoDecoder_Stop(videoDec);
    if (ret != AV_ERR_OK) {
        // Exception handling.
    }
    ```
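
    A minimal sketch of resuming after a stop is shown below; it assumes that the PPS/SPS has been written into **buffer** (obtained from **OnNeedInputBuffer**, with **size** giving the length of that data), mirroring the retransfer shown in step 14.

    ```c++
    // Restart the decoder.
    int32_t ret = OH_VideoDecoder_Start(videoDec);
    if (ret != AV_ERR_OK) {
        // Exception handling.
    }
    // The first buffer pushed after the restart carries the parameter set (PPS/SPS).
    OH_AVCodecBufferAttr csdInfo;
    csdInfo.size = size;   // Size of the PPS/SPS data written into the buffer.
    csdInfo.offset = 0;
    csdInfo.pts = 0;
    csdInfo.flags = AVCODEC_BUFFER_FLAGS_CODEC_DATA;
    ret = OH_AVBuffer_SetBufferAttr(buffer, &csdInfo);
    if (ret != AV_ERR_OK) {
        // Exception handling.
    }
    ret = OH_VideoDecoder_PushInputBuffer(videoDec, index);
    if (ret != AV_ERR_OK) {
        // Exception handling.
    }
    // Subsequent input starts from an IDR frame.
    ```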

17. Call **OH_VideoDecoder_Destroy()** to destroy the decoder instance and release resources.

    > **NOTE**
    >
    > This API cannot be called in the callback function.
    > After the call, you must set the decoder to NULL to prevent program errors caused by wild pointers.
    >

    ```c++
    // Call OH_VideoDecoder_Destroy to destroy the decoder.
    int32_t ret = AV_ERR_OK;
    if (videoDec != NULL) {
        ret = OH_VideoDecoder_Destroy(videoDec);
        videoDec = NULL;
    }
    if (ret != AV_ERR_OK) {
        // Exception handling.
    }
    ```

### Buffer Output

The following walks you through how to implement the entire video decoding process in buffer mode. In this example, an H.264 file is input and decoded into a YUV file.

Currently, the VideoDecoder module supports only data rotation in asynchronous mode.

1. Add the header files.

    ```c++
    #include <multimedia/player_framework/native_avcodec_videodecoder.h>
    #include <multimedia/player_framework/native_avcapability.h>
    #include <multimedia/player_framework/native_avcodec_base.h>
    #include <multimedia/player_framework/native_avformat.h>
    #include <multimedia/player_framework/native_avbuffer.h>
    #include <native_buffer/native_buffer.h>
    #include <fstream>
    ```

2. Create a decoder instance.

    The procedure is the same as that in surface mode and is not described here.

    ```c++
    // To create a decoder by name, call OH_AVCapability_GetName to obtain the codec names available and then call OH_VideoDecoder_CreateByName. If your application has special requirements, for example, expecting a decoder that supports a certain resolution, you can call OH_AVCodec_GetCapability to query the capability first.
    OH_AVCapability *capability = OH_AVCodec_GetCapability(OH_AVCODEC_MIMETYPE_VIDEO_AVC, false);
    const char *name = OH_AVCapability_GetName(capability);
    OH_AVCodec *videoDec = OH_VideoDecoder_CreateByName(name);
    ```

    ```c++
    // Create a decoder by MIME type. Only specific codecs recommended by the system can be created in this way.
    // If multiple codecs need to be created, create hardware decoder instances first. If the hardware resources are insufficient, create software decoder instances.
    // Create an H.264 decoder for software/hardware decoding.
    OH_AVCodec *videoDec = OH_VideoDecoder_CreateByMime(OH_AVCODEC_MIMETYPE_VIDEO_AVC);
    // Alternatively, create an H.265 decoder for hardware decoding.
    // OH_AVCodec *videoDec = OH_VideoDecoder_CreateByMime(OH_AVCODEC_MIMETYPE_VIDEO_HEVC);
    ```

3. Call **OH_VideoDecoder_RegisterCallback()** to register the callback functions.

    Register the **OH_AVCodecCallback** struct that defines the following callback function pointers:

    - **OH_AVCodecOnError**, a callback used to report a codec operation error. For details about the error codes, see [OH_AVCodecOnError](../../reference/apis-avcodec-kit/_codec_base.md#oh_avcodeconerror).
    - **OH_AVCodecOnStreamChanged**, a callback used to report a codec stream change, for example, stream width or height change.
    - **OH_AVCodecOnNeedInputBuffer**, a callback used to report input data required, which means that the decoder is ready for receiving data.
    - **OH_AVCodecOnNewOutputBuffer**, a callback used to report output data generated, which means that decoding is complete.

    You need to process the callback functions to ensure that the decoder runs properly.

    <!--RP2--><!--RP2End-->

    ```c++
    int32_t cropTop = 0;
    int32_t cropBottom = 0;
    int32_t cropLeft = 0;
    int32_t cropRight = 0;
    bool isFirstFrame = true;
    // Implement the OH_AVCodecOnError callback function.
    static void OnError(OH_AVCodec *codec, int32_t errorCode, void *userData)
    {
        // Process the error code in the callback.
        (void)codec;
        (void)errorCode;
        (void)userData;
    }

    // Implement the OH_AVCodecOnStreamChanged callback function.
    static void OnStreamChanged(OH_AVCodec *codec, OH_AVFormat *format, void *userData)
    {
        // Optional. Configure the data when you want to obtain the video width, height, and stride.
        // The changed video width, height, and stride can be obtained through format.
        (void)codec;
        (void)userData;
        OH_AVFormat_GetIntValue(format, OH_MD_KEY_VIDEO_PIC_WIDTH, &width);
        OH_AVFormat_GetIntValue(format, OH_MD_KEY_VIDEO_PIC_HEIGHT, &height);
        OH_AVFormat_GetIntValue(format, OH_MD_KEY_VIDEO_STRIDE, &widthStride);
        OH_AVFormat_GetIntValue(format, OH_MD_KEY_VIDEO_SLICE_HEIGHT, &heightStride);
        // (Optional) Obtain the cropped rectangle information.
        OH_AVFormat_GetIntValue(format, OH_MD_KEY_VIDEO_CROP_TOP, &cropTop);
        OH_AVFormat_GetIntValue(format, OH_MD_KEY_VIDEO_CROP_BOTTOM, &cropBottom);
        OH_AVFormat_GetIntValue(format, OH_MD_KEY_VIDEO_CROP_LEFT, &cropLeft);
        OH_AVFormat_GetIntValue(format, OH_MD_KEY_VIDEO_CROP_RIGHT, &cropRight);
    }

    // Implement the OH_AVCodecOnNeedInputBuffer callback function.
    static void OnNeedInputBuffer(OH_AVCodec *codec, uint32_t index, OH_AVBuffer *buffer, void *userData)
    {
        // The index of the input frame buffer is sent to InIndexQueue.
        // The input frame data (specified by buffer) is sent to InBufferQueue.
        // Process the data.
        // Write the stream to decode.
    }

    // Implement the OH_AVCodecOnNewOutputBuffer callback function.
    static void OnNewOutputBuffer(OH_AVCodec *codec, uint32_t index, OH_AVBuffer *buffer, void *userData)
    {
        // Optional. Configure the data when you want to obtain the video width, height, and stride.
        // The index of the output frame buffer is sent to outIndexQueue.
        // The output frame data (specified by buffer) is sent to outBufferQueue.
        // Obtain the video width, height, and stride.
        if (isFirstFrame) {
            OH_AVFormat *format = OH_VideoDecoder_GetOutputDescription(codec);
            OH_AVFormat_GetIntValue(format, OH_MD_KEY_VIDEO_PIC_WIDTH, &width);
            OH_AVFormat_GetIntValue(format, OH_MD_KEY_VIDEO_PIC_HEIGHT, &height);
            OH_AVFormat_GetIntValue(format, OH_MD_KEY_VIDEO_STRIDE, &widthStride);
            OH_AVFormat_GetIntValue(format, OH_MD_KEY_VIDEO_SLICE_HEIGHT, &heightStride);
            // (Optional) Obtain the cropped rectangle information.
            OH_AVFormat_GetIntValue(format, OH_MD_KEY_VIDEO_CROP_TOP, &cropTop);
            OH_AVFormat_GetIntValue(format, OH_MD_KEY_VIDEO_CROP_BOTTOM, &cropBottom);
            OH_AVFormat_GetIntValue(format, OH_MD_KEY_VIDEO_CROP_LEFT, &cropLeft);
            OH_AVFormat_GetIntValue(format, OH_MD_KEY_VIDEO_CROP_RIGHT, &cropRight);
            OH_AVFormat_Destroy(format);
            isFirstFrame = false;
        }
        // Process the data.
        // Release the decoded frame.
    }
    // Call OH_VideoDecoder_RegisterCallback() to register the callback functions.
    OH_AVCodecCallback cb = {&OnError, &OnStreamChanged, &OnNeedInputBuffer, &OnNewOutputBuffer};
    // Set the asynchronous callbacks.
    int32_t ret = OH_VideoDecoder_RegisterCallback(videoDec, cb, NULL); // NULL: userData is null.
    if (ret != AV_ERR_OK) {
        // Exception handling.
    }
    ```
    > **NOTE**
    >
    > In the callback functions, pay attention to multi-thread synchronization for operations on the data queue.
    >

4. (Optional) Call **OH_VideoDecoder_SetDecryptionConfig** to set the decryption configuration. Call this API after the media key system information is obtained and a media key is obtained but before **Prepare()** is called. For details about how to obtain such information, see step 3 in [Audio and Video Demuxing](audio-video-demuxer.md). In buffer mode, the DRM decryption capability supports only non-secure video channels. For details about DRM APIs, see [DRM](../../reference/apis-drm-kit/_drm.md).

    Add the header files.

    ```c++
    #include <multimedia/drm_framework/native_mediakeysystem.h>
    #include <multimedia/drm_framework/native_mediakeysession.h>
    #include <multimedia/drm_framework/native_drm_err.h>
    #include <multimedia/drm_framework/native_drm_common.h>
    ```
    Link the dynamic library in the CMake script.

    ``` cmake
    target_link_libraries(sample PUBLIC libnative_drm.so)
    ```

    The following is the sample code:
    ```c++
    // Create a media key system based on the media key system information. The following uses com.clearplay.drm as an example.
    MediaKeySystem *system = nullptr;
    int32_t ret = OH_MediaKeySystem_Create("com.clearplay.drm", &system);
    if (system == nullptr) {
        printf("create media key system failed");
        return;
    }

    // Create a media key session.
    // To use a non-secure video channel, create a MediaKeySession with the content protection level of CONTENT_PROTECTION_LEVEL_SW_CRYPTO or higher.
    MediaKeySession *session = nullptr;
    DRM_ContentProtectionLevel contentProtectionLevel = CONTENT_PROTECTION_LEVEL_SW_CRYPTO;
    ret = OH_MediaKeySystem_CreateMediaKeySession(system, &contentProtectionLevel, &session);
    if (ret != DRM_OK) {
        // If the creation fails, check the DRM interface document and logs.
        printf("create media key session failed.");
        return;
    }
    if (session == nullptr) {
        printf("media key session is nullptr.");
        return;
    }
    // Generate a media key request and set the response to the media key request.
    // Set the decryption configuration, that is, set the decryption session and secure video channel flag to the decoder.
    bool secureVideoPath = false;
    ret = OH_VideoDecoder_SetDecryptionConfig(videoDec, session, secureVideoPath);
    ```

5. Call **OH_VideoDecoder_Configure()** to configure the decoder.

    The procedure is the same as that in surface mode and is not described here.

    ```c++
    OH_AVFormat *format = OH_AVFormat_Create();
    // Set the format.
    OH_AVFormat_SetIntValue(format, OH_MD_KEY_WIDTH, width);
    OH_AVFormat_SetIntValue(format, OH_MD_KEY_HEIGHT, height);
    OH_AVFormat_SetIntValue(format, OH_MD_KEY_PIXEL_FORMAT, DEFAULT_PIXELFORMAT);
    // Configure the decoder.
    int32_t ret = OH_VideoDecoder_Configure(videoDec, format);
    if (ret != AV_ERR_OK) {
        // Exception handling.
    }
    OH_AVFormat_Destroy(format);
    ```

6. Call **OH_VideoDecoder_Prepare()** to prepare internal resources for the decoder.

    ```c++
    int32_t ret = OH_VideoDecoder_Prepare(videoDec);
    if (ret != AV_ERR_OK) {
        // Exception handling.
    }
    ```

7. Call **OH_VideoDecoder_Start()** to start the decoder.

    ```c++
    std::string_view inputFilePath = "/*yourpath*.h264";
    std::string_view outputFilePath = "/*yourpath*.yuv";
    std::unique_ptr<std::ifstream> inputFile = std::make_unique<std::ifstream>();
    std::unique_ptr<std::ofstream> outputFile = std::make_unique<std::ofstream>();
    inputFile->open(inputFilePath.data(), std::ios::in | std::ios::binary);
    outputFile->open(outputFilePath.data(), std::ios::out | std::ios::binary | std::ios::ate);
    // Start the decoder.
    int32_t ret = OH_VideoDecoder_Start(videoDec);
    if (ret != AV_ERR_OK) {
        // Exception handling.
    }
    ```

8. (Optional) Call **OH_AVCencInfo_SetAVBuffer()** to set the CENC information.

    The procedure is the same as that in surface mode and is not described here.

    The following is the sample code:
    ```c++
    uint32_t keyIdLen = DRM_KEY_ID_SIZE;
    uint8_t keyId[] = {
        0xd4, 0xb2, 0x01, 0xe4, 0x61, 0xc8, 0x98, 0x96,
        0xcf, 0x05, 0x22, 0x39, 0x8d, 0x09, 0xe6, 0x28};
    uint32_t ivLen = DRM_KEY_IV_SIZE;
    uint8_t iv[] = {
        0xbf, 0x77, 0xed, 0x51, 0x81, 0xde, 0x36, 0x3e,
        0x52, 0xf7, 0x20, 0x4f, 0x72, 0x14, 0xa3, 0x95};
    uint32_t encryptedBlockCount = 0;
    uint32_t skippedBlockCount = 0;
    uint32_t firstEncryptedOffset = 0;
    uint32_t subsampleCount = 1;
    DrmSubsample subsamples[1] = { {0x10, 0x16} };
    // Create a CencInfo instance.
    OH_AVCencInfo *cencInfo = OH_AVCencInfo_Create();
    if (cencInfo == nullptr) {
        // Exception handling.
    }
    // Set the decryption algorithm.
    OH_AVErrCode errNo = OH_AVCencInfo_SetAlgorithm(cencInfo, DRM_ALG_CENC_AES_CTR);
    if (errNo != AV_ERR_OK) {
        // Exception handling.
    }
    // Set KeyId and Iv.
    errNo = OH_AVCencInfo_SetKeyIdAndIv(cencInfo, keyId, keyIdLen, iv, ivLen);
    if (errNo != AV_ERR_OK) {
        // Exception handling.
    }
    // Set the sample information.
    errNo = OH_AVCencInfo_SetSubsampleInfo(cencInfo, encryptedBlockCount, skippedBlockCount, firstEncryptedOffset,
        subsampleCount, subsamples);
    if (errNo != AV_ERR_OK) {
        // Exception handling.
    }
    // Set the mode. KeyId, Iv, and SubSamples have been set.
    errNo = OH_AVCencInfo_SetMode(cencInfo, DRM_CENC_INFO_KEY_IV_SUBSAMPLES_SET);
    if (errNo != AV_ERR_OK) {
        // Exception handling.
    }
    // Set CencInfo to the AVBuffer.
    errNo = OH_AVCencInfo_SetAVBuffer(cencInfo, buffer);
    if (errNo != AV_ERR_OK) {
        // Exception handling.
    }
    // Destroy the CencInfo instance.
    errNo = OH_AVCencInfo_Destroy(cencInfo);
    if (errNo != AV_ERR_OK) {
        // Exception handling.
    }
    ```

9. Call **OH_VideoDecoder_PushInputBuffer()** to push the stream to the input buffer for decoding.

    The procedure is the same as that in surface mode and is not described here.

    ```c++
    // Configure the size, offset, and timestamp of the frame data.
    OH_AVCodecBufferAttr info;
    info.size = size;
    info.offset = offset;
    info.pts = pts;
    info.flags = flags;
    // Write the information to the buffer.
    int32_t ret = OH_AVBuffer_SetBufferAttr(buffer, &info);
    if (ret != AV_ERR_OK) {
        // Exception handling.
    }
    // Send the data to the input buffer for decoding. index is the index of the buffer.
    ret = OH_VideoDecoder_PushInputBuffer(videoDec, index);
    if (ret != AV_ERR_OK) {
        // Exception handling.
    }
    ```

10. Call **OH_VideoDecoder_FreeOutputBuffer()** to release decoded frames.

    In the code snippet below, the following variables are used:

    - **index**: parameter passed by the callback function **OnNewOutputBuffer**, which uniquely corresponds to the buffer.
    - **buffer**: parameter passed by the callback function **OnNewOutputBuffer**. You can obtain the virtual address of the image by calling **OH_AVBuffer_GetAddr**.

    ```c++
    // Obtain the decoded information.
    OH_AVCodecBufferAttr info;
    int32_t ret = OH_AVBuffer_GetBufferAttr(buffer, &info);
    if (ret != AV_ERR_OK) {
        // Exception handling.
    }
    // Write the decoded data to the output file.
    outputFile->write(reinterpret_cast<char *>(OH_AVBuffer_GetAddr(buffer)), info.size);
    // Free the buffer that stores the output data. index is the index of the buffer.
    ret = OH_VideoDecoder_FreeOutputBuffer(videoDec, index);
    if (ret != AV_ERR_OK) {
        // Exception handling.
    }
    ```

    To copy the Y, U, and V components of an NV12 or NV21 image to another buffer in sequence, perform the following steps (an NV12 image is used as an example). The figure below shows the image layout of **width**, **height**, **wStride**, and **hStride**, where:

    - **OH_MD_KEY_VIDEO_PIC_WIDTH** corresponds to **width**.
    - **OH_MD_KEY_VIDEO_PIC_HEIGHT** corresponds to **height**.
    - **OH_MD_KEY_VIDEO_STRIDE** corresponds to **wStride**.
    - **OH_MD_KEY_VIDEO_SLICE_HEIGHT** corresponds to **hStride**.

    ![copy by line](figures/copy-by-line.png)

    Add the header files.

    ```c++
    #include <string.h>
    ```
    The following is the sample code:

    ```c++
    struct Rect // Width and height of the source buffer. They are obtained by calling OnNewOutputBuffer.
    {
        int32_t width;
        int32_t height;
    };

    struct DstRect // Width stride and height stride of the destination buffer. They are set by the caller.
    {
        int32_t wStride;
        int32_t hStride;
    };

    struct SrcRect // Width stride and height stride of the source buffer. They are obtained by calling OnNewOutputBuffer.
    {
        int32_t wStride;
        int32_t hStride;
    };

    Rect rect = {320, 240};
    DstRect dstRect = {320, 250};
    SrcRect srcRect = {320, 250};
    uint8_t* dst = new uint8_t[dstRect.hStride * dstRect.wStride * 3 / 2]; // Pointer to the target memory area (NV12 needs 1.5 bytes per pixel of the aligned area).
    uint8_t* src = new uint8_t[srcRect.hStride * srcRect.wStride * 3 / 2]; // Pointer to the source memory area.
    uint8_t* dstBase = dst; // Keep the original pointers for the final delete[].
    uint8_t* srcBase = src;

    // Y: Copy the source data in the Y region to the target data in another region.
    for (int32_t i = 0; i < rect.height; ++i) {
        // Copy a row of data from the source to a row of the target.
        memcpy_s(dst, rect.width, src, rect.width);
        // Update the pointers to the source data and target data to copy the next row. The pointers move downwards by one wStride each time.
        dst += dstRect.wStride;
        src += srcRect.wStride;
    }
    // padding
    // Update the pointers to the source data and target data. The pointers move downwards by one padding.
    dst += (dstRect.hStride - rect.height) * dstRect.wStride;
    src += (srcRect.hStride - rect.height) * srcRect.wStride;
    rect.height >>= 1;
    // UV: Copy the source data in the UV region to the target data in another region.
    for (int32_t i = 0; i < rect.height; ++i) {
        memcpy_s(dst, rect.width, src, rect.width);
        dst += dstRect.wStride;
        src += srcRect.wStride;
    }

    delete[] dstBase;
    dst = nullptr;
    delete[] srcBase;
    src = nullptr;
    ```

    During hardware decoding, the AVBuffer received in the output callback contains image data that has been aligned in width and height. When processing the buffer data (before releasing it), obtain the image width, height, stride, and pixel format to ensure that the decoded data is processed correctly. For details, see step 3 in [Buffer Output](#buffer-output).

The subsequent processes (including refreshing, resetting, stopping, and destroying the decoder) are basically the same as those in surface mode. For details, see steps 14-17 in [Surface Output](#surface-output).

<!--RP5-->
<!--RP5End-->
