1# Video Decoding 2 3You can call the native APIs provided by the VideoDecoder module to decode video, that is, to decode media data into a YUV file or render it. 4 5<!--RP3--><!--RP3End--> 6 7For details about the supported decoding capabilities, see [AVCodec Supported Formats](avcodec-support-formats.md#video-decoding). 8 9<!--RP1--><!--RP1End--> 10 11Through the VideoDecoder module, your application can implement the following key capabilities. 12 13| Capability | How to Configure | 14| --------------------------------------- | ---------------------------------------------------------------------------------- | 15| Variable resolution | The decoder supports the change of the input stream resolution. After the resolution is changed, the callback function **OnStreamChanged()** set by **OH_VideoDecoder_RegisterCallback** is triggered. For details, see step 3 in surface mode or step 3 in buffer mode. | 16| Dynamic surface switching | Call **OH_VideoDecoder_SetSurface** to configure this capability. It is supported only in surface mode. For details, see step 6 in surface mode. | 17| Low-latency decoding | Call **OH_VideoDecoder_Configure** to configure this capability. For details, see step 5 in surface mode or step 5 in buffer mode. | 18 19## Constraints 20 21- HDR Vivid decoding is not supported in buffer mode. 22- After **flush()**, **reset()**, or **stop()** is called, the PPS/SPS must be transferred again in the **start()** call. For details about the example, see step 13 in [Surface Output](#surface-output). 23- If **flush()**, **reset()**, **stop()**, or **destroy()** is executed in a non-callback thread, the execution result is returned after all callbacks are executed. 24- Due to limited hardware decoder resources, you must call **OH_VideoDecoder_Destroy** to destroy every decoder instance when it is no longer needed. 25- The input streams for video decoding support only the AnnexB format, and the supported AnnexB format supports multiple slices. However, the slices of the same frame must be sent to the decoder at a time. 26- When **flush()**, **reset()**, or **stop()** is called, do not continue to operate the OH_AVBuffer obtained through the previous callback function. 27- The DRM decryption capability supports both non-secure and secure video channels in [surface mode](#surface-output), but only non-secure video channels in buffer mode (#buffer-output). 28- The buffer mode and surface mode use the same APIs. Therefore, the surface mode is described as an example. 29- In buffer mode, after obtaining the pointer to an OH_AVBuffer instance through the callback function **OH_AVCodecOnNewOutputBuffer**, call **OH_VideoDecoder_FreeOutputBuffer** to notify the system that the buffer has been fully utilized. In this way, the system can write the subsequently decoded data to the corresponding location. If the OH_NativeBuffer instance is obtained through **OH_AVBuffer_GetNativeBuffer** and its lifecycle extends beyond that of the OH_AVBuffer pointer instance, you mut perform data duplication. In this case, you should manage the lifecycle of the newly generated OH_NativeBuffer object to ensure that the object can be correctly used and released. 30<!--RP6--><!--RP6End--> 31 32## Surface Output and Buffer Output 33 34- Surface output and buffer output differ in data output modes. 35- They are applicable to different scenarios. 36 - Surface output indicates that the OHNativeWindow is used to transfer output data. It supports connection with other modules, such as the **XComponent**. 37 - Buffer output indicates that decoded data is output in shared memory mode. 38 39- The two also differ slightly in the API calling modes: 40 - In surface mode, the caller can choose to call **OH_VideoDecoder_FreeOutputBuffer** to free the output buffer (without rendering the data). In buffer mode, the caller must call **OH_VideoDecoder_FreeOutputBuffer** to free the output buffer. 41 - In surface mode, the caller must call **OH_VideoDecoder_SetSurface** to set an OHNativeWindow before the decoder is ready and call **OH_VideoDecoder_RenderOutputBuffer** to render the decoded data after the decoder is started. 42 - In buffer mode, an application can obtain the shared memory address and data from the output buffer. In surface mode, an application can obtain the data from the output buffer. 43 44For details about the development procedure, see [Surface Output](#surface-output) and [Buffer Output](#buffer-output). 45 46## State Machine Interaction 47 48The following figure shows the interaction between states. 49 50 51 521. A decoder enters the Initialized state in either of the following ways: 53 - When a decoder instance is initially created, the decoder enters the Initialized state. 54 - When **OH_VideoDecoder_Reset** is called in any state, the decoder returns to the Initialized state. 55 562. When the decoder is in the Initialized state, you can call **OH_VideoDecoder_Configure** to configure the decoder. After the configuration, the decoder enters the Configured state. 573. When the decoder is in the Configured state, you can call **OH_VideoDecoder_Prepare** to switch it to the Prepared state. 584. When the decoder is in the Prepared state, you can call **OH_VideoDecoder_Start** to switch it to the Executing state. 59 - When the decoder is in the Executing state, you can call **OH_VideoDecoder_Stop** to switch it back to the Prepared state. 60 615. In rare cases, the decoder may encounter an error and enter the Error state. If this is the case, an invalid value can be returned or an exception can be thrown through a queue operation. 62 - When the decoder is in the Error state, you can either call **OH_VideoDecoder_Reset** to switch it to the Initialized state or call **OH_VideoDecoder_Destroy** to switch it to the Released state. 63 646. The Executing state has three substates: Flushed, Running, and End-of-Stream. 65 - After **OH_VideoDecoder_Start** is called, the decoder enters the Running substate immediately. 66 - When the decoder is in the Executing state, you can call **OH_VideoDecoder_Flush** to switch it to the Flushed substate. 67 - After all data to be processed is transferred to the decoder, the [AVCODEC_BUFFER_FLAGS_EOS](../../reference/apis-avcodec-kit/_core.md#oh_avcodecbufferflags-1) flag is added to the last input buffer in the input buffers queue. Once this flag is detected, the decoder transits to the End-of-Stream substate. In this state, the decoder does not accept new inputs, but continues to generate outputs until it reaches the tail frame. 68 697. When the decoder is no longer needed, you must call **OH_VideoDecoder_Destroy** to destroy the decoder instance, which then transitions to the Released state. 70 71## How to Develop 72 73Read [VideoDecoder](../../reference/apis-avcodec-kit/_video_decoder.md) for the API reference. 74 75The figure below shows the call relationship of video decoding. 76 77- The dotted line indicates an optional operation. 78 79- The solid line indicates a mandatory operation. 80 81 82 83### Linking the Dynamic Link Libraries in the CMake Script 84 85``` cmake 86target_link_libraries(sample PUBLIC libnative_media_codecbase.so) 87target_link_libraries(sample PUBLIC libnative_media_core.so) 88target_link_libraries(sample PUBLIC libnative_media_vdec.so) 89``` 90 91> **NOTE** 92> 93> The word **sample** in the preceding code snippet is only an example. Use the actual project directory name. 94> 95 96### Defining the Basic Structure 97 98The sample code provided in this section adheres to the C++17 standard and is for reference only. You can define your own buffer objects by referring to it. 99 1001. Add the header files. 101 102 ```c++ 103 #include <condition_variable> 104 #include <memory> 105 #include <mutex> 106 #include <queue> 107 #include <shared_mutex> 108 ``` 109 1102. Define the information about the decoder callback buffer. 111 112 ```c++ 113 struct CodecBufferInfo { 114 CodecBufferInfo(uint32_t index, OH_AVBuffer *buffer): index(index), buffer(buffer), isValid(true) {} 115 // Callback buffer. 116 OH_AVBuffer *buffer = nullptr; 117 // Index of the callback buffer. 118 uint32_t index = 0; 119 // Check whether the current buffer information is valid. 120 bool isValid = true; 121 }; 122 ``` 123 1243. Define the input and output queue for decoding. 125 126 ```c++ 127 class CodecBufferQueue { 128 public: 129 // Pass the callback buffer information to the queue. 130 void Enqueue(const std::shared_ptr<CodecBufferInfo> bufferInfo) 131 { 132 std::unique_lock<std::mutex> lock(mutex_); 133 bufferQueue_.push(bufferInfo); 134 cond_.notify_all(); 135 } 136 137 // Obtain the information about the callback buffer. 138 std::shared_ptr<CodecBufferInfo> Dequeue(int32_t timeoutMs = 1000) 139 { 140 std::unique_lock<std::mutex> lock(mutex_); 141 (void)cond_.wait_for(lock, std::chrono::milliseconds(timeoutMs), [this]() { return !bufferQueue_.empty(); }); 142 if (bufferQueue_.empty()) { 143 return nullptr; 144 } 145 std::shared_ptr<CodecBufferInfo> bufferInfo = bufferQueue_.front(); 146 bufferQueue_.pop(); 147 return bufferInfo; 148 } 149 150 // Clear the queue. The previous callback buffer becomes unavailable. 151 void Flush() 152 { 153 std::unique_lock<std::mutex> lock(mutex_); 154 while (!bufferQueue_.empty()) { 155 std::shared_ptr<CodecBufferInfo> bufferInfo = bufferQueue_.front(); 156 // After the flush, stop, reset, and destroy operations are performed, the previous callback buffer information is invalid. 157 bufferInfo->isValid = false; 158 bufferQueue_.pop(); 159 } 160 } 161 162 private: 163 std::mutex mutex_; 164 std::condition_variable cond_; 165 std::queue<std::shared_ptr<CodecBufferInfo>> bufferQueue_; 166 }; 167 ``` 168 1694. Define global variables. 170 171 These global variables are for reference only. They can be encapsulated into an object based on service requirements. 172 173 ```c++ 174 // Video frame width. 175 int32_t width = 320; 176 // Video frame height. 177 int32_t height = 240; 178 // Video pixel format. 179 OH_AVPixelFormat pixelFormat = AV_PIXEL_FORMAT_NV12; 180 // Video width stride. 181 int32_t widthStride = 0; 182 // Video height stride. 183 int32_t heightStride = 0; 184 // Pointer to the decoder instance. 185 OH_AVCodec *videoDec = nullptr; 186 // Decoder synchronization lock. 187 std::shared_mutex codecMutex; 188 // Decoder input queue. 189 CodecBufferQueue inQueue; 190 // Decoder output queue. 191 CodecBufferQueue outQueue; 192 ``` 193 194### Surface Output 195 196The following walks you through how to implement the entire video decoding process in surface mode. In this example, an H.264 stream file is input, decoded, and rendered. 197 198Currently, the VideoDecoder module supports only data rotation in asynchronous mode. 199 2001. Add the header files. 201 202 ```c++ 203 #include <multimedia/player_framework/native_avcodec_videodecoder.h> 204 #include <multimedia/player_framework/native_avcapability.h> 205 #include <multimedia/player_framework/native_avcodec_base.h> 206 #include <multimedia/player_framework/native_avformat.h> 207 #include <multimedia/player_framework/native_avbuffer.h> 208 #include <fstream> 209 ``` 210 2112. Create a decoder instance. 212 213 You can create a decoder by name or MIME type. In the code snippet below, the following variables are used: 214 215 - **videoDec**: pointer to the video decoder instance. 216 - **capability**: pointer to the decoder's capability. 217 - **OH_AVCODEC_MIMETYPE_VIDEO_AVC**: AVC video codec. 218 219 ```c++ 220 // To create a decoder by name, call OH_AVCapability_GetName to obtain the codec names available and then call OH_VideoDecoder_CreateByName. If your application has special requirements, for example, expecting a decoder that supports a certain resolution, you can call OH_AVCodec_GetCapability to query the capability first. 221 OH_AVCapability *capability = OH_AVCodec_GetCapability(OH_AVCODEC_MIMETYPE_VIDEO_AVC, false); 222 // Create hardware decoder instances. 223 OH_AVCapability *capability= OH_AVCodec_GetCapabilityByCategory(OH_AVCODEC_MIMETYPE_VIDEO_AVC, false, HARDWARE); 224 const char *name = OH_AVCapability_GetName(capability); 225 OH_AVCodec *videoDec = OH_VideoDecoder_CreateByName(name); 226 ``` 227 228 ```c++ 229 // Create a decoder by MIME type. Only specific codecs recommended by the system can be created in this way. 230 // If multiple codecs need to be created, create hardware decoder instances first. If the hardware resources are insufficient, create software decoder instances. 231 // Create an H.264 decoder for software/hardware decoding. 232 OH_AVCodec *videoDec = OH_VideoDecoder_CreateByMime(OH_AVCODEC_MIMETYPE_VIDEO_AVC); 233 // Create an H.265 decoder for software/hardware decoding. 234 OH_AVCodec *videoDec = OH_VideoDecoder_CreateByMime(OH_AVCODEC_MIMETYPE_VIDEO_HEVC); 235 ``` 236 2373. Call **OH_VideoDecoder_RegisterCallback()** to register the callback functions. 238 239 Register the **OH_AVCodecCallback** struct that defines the following callback function pointers: 240 241 - **OH_AVCodecOnError**, a callback used to report a codec operation error. For details about the error codes, see [OH_AVCodecOnError](../../reference/apis-avcodec-kit/_codec_base.md#oh_avcodeconerror). 242 - **OH_AVCodecOnStreamChanged**, a callback used to report a codec stream change, for example, stream width or height change. 243 - **OH_AVCodecOnNeedInputBuffer**, a callback used to report input data required, which means that the decoder is ready for receiving data. 244 - **OH_AVCodecOnNewOutputBuffer**, a callback used to report output data generated, which means that decoding is complete. (Note: The **buffer** parameter in surface mode is null.) 245 246 You need to process the callback functions to ensure that the decoder runs properly. 247 248 <!--RP2--><!--RP2End--> 249 250 ```c++ 251 // Implement the OH_AVCodecOnError callback function. 252 static void OnError(OH_AVCodec *codec, int32_t errorCode, void *userData) 253 { 254 // Process the error code in the callback. 255 (void)codec; 256 (void)errorCode; 257 (void)userData; 258 } 259 260 // Implement the OH_AVCodecOnStreamChanged callback function. 261 static void OnStreamChanged(OH_AVCodec *codec, OH_AVFormat *format, void *userData) 262 { 263 // The changed video width and height can be obtained through format. 264 (void)codec; 265 (void)userData; 266 OH_AVFormat_GetIntValue(format, OH_MD_KEY_VIDEO_PIC_WIDTH, &width); 267 OH_AVFormat_GetIntValue(format, OH_MD_KEY_VIDEO_PIC_HEIGHT, &height); 268 } 269 270 // Implement the OH_AVCodecOnNeedInputBuffer callback function. 271 static void OnNeedInputBuffer(OH_AVCodec *codec, uint32_t index, OH_AVBuffer *buffer, void *userData) 272 { 273 // The data buffer of the input frame and its index are sent to inQueue. 274 (void)codec; 275 (void)userData; 276 inQueue.Enqueue(std::make_shared<CodecBufferInfo>(index, buffer)); 277 } 278 279 // Implement the OH_AVCodecOnNewOutputBuffer callback function. 280 static void OnNewOutputBuffer(OH_AVCodec *codec, uint32_t index, OH_AVBuffer *buffer, void *userData) 281 { 282 // The data buffer of the finished frame and its index are sent to outQueue. 283 (void)codec; 284 (void)userData; 285 outQueue.Enqueue(std::make_shared<CodecBufferInfo>(index, buffer)); 286 } 287 // Call OH_VideoDecoder_RegisterCallback() to register the callback functions. 288 OH_AVCodecCallback cb = {&OnError, &OnStreamChanged, &OnNeedInputBuffer, &OnNewOutputBuffer}; 289 // Set the asynchronous callbacks. 290 int32_t ret = OH_VideoDecoder_RegisterCallback(videoDec, cb, nullptr); // nullptr: userData is null. 291 if (ret != AV_ERR_OK) { 292 // Handle exceptions. 293 } 294 ``` 295 296 > **NOTE** 297 > 298 > In the callback functions, pay attention to multi-thread synchronization for operations on the data queue. 299 > 300 > During video playback, if the SPS of the video stream contains color information, the decoder will return the information (RangeFlag, ColorPrimary, MatrixCoefficient, and TransferCharacteristic) through the **OH_AVFormat** parameter in the **OH_AVCodecOnStreamChanged** callback. 301 > 302 > In surface mode of video decoding, the internal data is processed by using High Efficiency Bandwidth Compression (HEBC) by default, and the values of **widthStride** and **heightStride** cannot be obtained. 303 3044. (Optional) Call **OH_VideoDecoder_SetDecryptionConfig** to set the decryption configuration. Call this API after the media key system information is obtained and a media key is obtained but before **Prepare()** is called. For details about how to obtain such information, see step 4 in [Media Data Demuxing](audio-video-demuxer.md). In surface mode, the DRM decryption capability supports both secure and non-secure video channels. For details about DRM APIs, see [DRM](../../reference/apis-drm-kit/_drm.md). 305 306 Add the header files. 307 308 ```c++ 309 #include <multimedia/drm_framework/native_mediakeysystem.h> 310 #include <multimedia/drm_framework/native_mediakeysession.h> 311 #include <multimedia/drm_framework/native_drm_err.h> 312 #include <multimedia/drm_framework/native_drm_common.h> 313 ``` 314 315 Linking the Dynamic Libraries in the CMake Script 316 317 ``` cmake 318 target_link_libraries(sample PUBLIC libnative_drm.so) 319 ``` 320 321 <!--RP4-->The following is the sample code:<!--RP4End--> 322 323 ```c++ 324 // Create a media key system based on the media key system information. The following uses com.clearplay.drm as an example. 325 MediaKeySystem *system = nullptr; 326 int32_t ret = OH_MediaKeySystem_Create("com.clearplay.drm", &system); 327 if (system == nullptr) { 328 printf("create media key system failed"); 329 return; 330 } 331 332 // Create a decryption session. If a secure video channel is used, create a MediaKeySession with the content protection level of CONTENT_PROTECTION_LEVEL_HW_CRYPTO or higher. 333 // To use a non-secure video channel, create a MediaKeySession with the content protection level of CONTENT_PROTECTION_LEVEL_SW_CRYPTO or higher. 334 MediaKeySession *session = nullptr; 335 DRM_ContentProtectionLevel contentProtectionLevel = CONTENT_PROTECTION_LEVEL_SW_CRYPTO; 336 ret = OH_MediaKeySystem_CreateMediaKeySession(system, &contentProtectionLevel, &session); 337 if (ret != DRM_OK) { 338 // If the creation fails, refer to the DRM interface document and check logs. 339 printf("create media key session failed."); 340 return; 341 } 342 if (session == nullptr) { 343 printf("media key session is nullptr."); 344 return; 345 } 346 347 // Generate a media key request and set the response to the media key request. 348 349 // Set the decryption configuration, that is, set the decryption session and secure video channel flag to the decoder. 350 // If the DRM scheme supports a secure video channel, set secureVideoPath to true and create a secure decoder before using the channel. 351 // That is, in step 2, call OH_VideoDecoder_CreateByName, with a decoder name followed by .secure (for example, [CodecName].secure) passed in, to create a secure decoder. 352 bool secureVideoPath = false; 353 ret = OH_VideoDecoder_SetDecryptionConfig(videoDec, session, secureVideoPath); 354 ``` 355 3565. Call **OH_VideoDecoder_Configure()** to configure the decoder. 357 358 For details about the configurable options, see [Video Dedicated Key-Value Paris](../../reference/apis-avcodec-kit/_codec_base.md#media-data-key-value-pairs). 359 360 For details about the parameter verification rules, see [OH_VideoDecoder_Configure()](../../reference/apis-avcodec-kit/_video_decoder.md#oh_videodecoder_configure). 361 362 The parameter value ranges can be obtained through the capability query interface. For details, see [Obtaining Supported Codecs](obtain-supported-codecs.md). 363 364 Currently, the following options must be configured for all supported formats: video frame width, video frame height, and video pixel format. 365 366 ```c++ 367 368 OH_AVFormat *format = OH_AVFormat_Create(); 369 // Set the format. 370 OH_AVFormat_SetIntValue (format, OH_MD_KEY_WIDTH, width); // Mandatory. 371 OH_AVFormat_SetIntValue(format, OH_MD_KEY_HEIGHT, height); // Mandatory. 372 OH_AVFormat_SetIntValue(format, OH_MD_KEY_PIXEL_FORMAT, pixelFormat); 373 // (Optional) Configure low-latency decoding. 374 OH_AVFormat_SetIntValue(format, OH_MD_KEY_VIDEO_ENABLE_LOW_LATENCY, 1); 375 // Configure the decoder. 376 int32_t ret = OH_VideoDecoder_Configure(videoDec, format); 377 if (ret != AV_ERR_OK) { 378 // Handle exceptions. 379 } 380 OH_AVFormat_Destroy(format); 381 ``` 382 3836. Set the surface. 384 385 You can obtain the native window in either of the following ways: 386 - If the image is directly displayed after being decoded, obtain the native window from the **XComponent**. For details about the operation, see [XComponent](../../reference/apis-arkui/arkui-ts/ts-basic-components-xcomponent.md). 387 - If OpenGL post-processing is performed after decoding, obtain the native window from NativeImage. For details about the operation, see [NativeImage](../../graphics/native-image-guidelines.md). 388 389 You perform this step during decoding, that is, dynamically switch the surface. 390 391 ```c++ 392 // Set the window parameters. 393 int32_t ret = OH_VideoDecoder_SetSurface(videoDec, nativeWindow); // Obtain the native window from the XComponent. 394 if (ret != AV_ERR_OK) { 395 // Handle exceptions. 396 } 397 // Configure the matching mode between the video and screen. (Scale the buffer at the original aspect ratio to enable the smaller side of the buffer to match the window, while making the excess part transparent.) 398 OH_NativeWindow_NativeWindowSetScalingModeV2(nativeWindow, OH_SCALING_MODE_SCALE_CROP_V2); 399 ``` 400 4017. (Optional) Call **OH_VideoDecoder_SetParameter()** to set the surface parameters of the decoder. 402 For details about the configurable options, see [Video Dedicated Key-Value Paris](../../reference/apis-avcodec-kit/_codec_base.md#media-data-key-value-pairs). 403 404 ```c++ 405 OH_AVFormat *format = OH_AVFormat_Create(); 406 // Configure the display rotation angle. 407 OH_AVFormat_SetIntValue(format, OH_MD_KEY_ROTATION, 90); 408 int32_t ret = OH_VideoDecoder_SetParameter(videoDec, format); 409 OH_AVFormat_Destroy(format); 410 ``` 411 4128. Call **OH_VideoDecoder_Prepare()** to prepare internal resources for the decoder. 413 414 ```c++ 415 ret = OH_VideoDecoder_Prepare(videoDec); 416 if (ret != AV_ERR_OK) { 417 // Handle exceptions. 418 } 419 ``` 420 4219. Call **OH_VideoDecoder_Start()** to start the decoder. 422 423 ```c++ 424 // Start the decoder. 425 int32_t ret = OH_VideoDecoder_Start(videoDec); 426 if (ret != AV_ERR_OK) { 427 // Handle exceptions. 428 } 429 ``` 430 43110. (Optional) Call **OH_AVCencInfo_SetAVBuffer()** to set the Common Encryption Scheme (CENC) information. 432 433 If the program to play is DRM encrypted and the application implements media demuxing instead of using the system's [demuxer](audio-video-demuxer.md), you must call **OH_AVCencInfo_SetAVBuffer()** to set the CENC information to the AVBuffer. In this way, the AVBuffer carries the data to be decrypted and CENC information, so that the media data in the AVBuffer can be decrypted. You do not need to call this API when the application uses the system's [demuxer](audio-video-demuxer.md). 434 435 Add the header files. 436 437 ```c++ 438 #include <multimedia/player_framework/native_cencinfo.h> 439 ``` 440 441 Link the dynamic library in the CMake script. 442 443 ``` cmake 444 target_link_libraries(sample PUBLIC libnative_media_avcencinfo.so) 445 ``` 446 447 In the code snippet below, the following variable is used: 448 - **buffer**: parameter passed by the callback function **OnNeedInputBuffer**. 449 ```c++ 450 uint32_t keyIdLen = DRM_KEY_ID_SIZE; 451 uint8_t keyId[] = { 452 0xd4, 0xb2, 0x01, 0xe4, 0x61, 0xc8, 0x98, 0x96, 453 0xcf, 0x05, 0x22, 0x39, 0x8d, 0x09, 0xe6, 0x28}; 454 uint32_t ivLen = DRM_KEY_IV_SIZE; 455 uint8_t iv[] = { 456 0xbf, 0x77, 0xed, 0x51, 0x81, 0xde, 0x36, 0x3e, 457 0x52, 0xf7, 0x20, 0x4f, 0x72, 0x14, 0xa3, 0x95}; 458 uint32_t encryptedBlockCount = 0; 459 uint32_t skippedBlockCount = 0; 460 uint32_t firstEncryptedOffset = 0; 461 uint32_t subsampleCount = 1; 462 DrmSubsample subsamples[1] = { {0x10, 0x16} }; 463 // Create a CencInfo instance. 464 OH_AVCencInfo *cencInfo = OH_AVCencInfo_Create(); 465 if (cencInfo == nullptr) { 466 // Handle exceptions. 467 } 468 // Set the decryption algorithm. 469 OH_AVErrCode errNo = OH_AVCencInfo_SetAlgorithm(cencInfo, DRM_ALG_CENC_AES_CTR); 470 if (errNo != AV_ERR_OK) { 471 // Handle exceptions. 472 } 473 // Set KeyId and Iv. 474 errNo = OH_AVCencInfo_SetKeyIdAndIv(cencInfo, keyId, keyIdLen, iv, ivLen); 475 if (errNo != AV_ERR_OK) { 476 // Handle exceptions. 477 } 478 // Set the sample information. 479 errNo = OH_AVCencInfo_SetSubsampleInfo(cencInfo, encryptedBlockCount, skippedBlockCount, firstEncryptedOffset, 480 subsampleCount, subsamples); 481 if (errNo != AV_ERR_OK) { 482 // Handle exceptions. 483 } 484 // Set the mode. KeyId, Iv, and SubSamples have been set. 485 errNo = OH_AVCencInfo_SetMode(cencInfo, DRM_CENC_INFO_KEY_IV_SUBSAMPLES_SET); 486 if (errNo != AV_ERR_OK) { 487 // Handle exceptions. 488 } 489 // Set CencInfo to the AVBuffer. 490 errNo = OH_AVCencInfo_SetAVBuffer(cencInfo, buffer); 491 if (errNo != AV_ERR_OK) { 492 // Handle exceptions. 493 } 494 // Destroy the CencInfo instance. 495 errNo = OH_AVCencInfo_Destroy(cencInfo); 496 if (errNo != AV_ERR_OK) { 497 // Handle exceptions. 498 } 499 ``` 500 50111. Call **OH_VideoDecoder_PushInputBuffer()** to push the stream to the input buffer for decoding. 502 503 In the code snippet below, the following variables are used: 504 505 - **buffer**: parameter passed by the callback function **OnNeedInputBuffer**. You can obtain the virtual address of the input stream by calling [OH_AVBuffer_GetAddr](../../reference/apis-avcodec-kit/_core.md#oh_avbuffer_getaddr). 506 - **index**: parameter passed by the callback function **OnNeedInputBuffer**, which uniquely corresponds to the buffer. 507 - **size**, **offset**, **pts**, and **frameData**: size, offset, timestamp, and frame data. For details about how to obtain such information, see step 9 in [Media Data Demuxing](./audio-video-demuxer.md). 508 - **flags**: type of the buffer flag. For details, see [OH_AVCodecBufferFlags](../../reference/apis-avcodec-kit/_core.md#oh_avcodecbufferflags). 509 510 ```c++ 511 std::shared_ptr<CodecBufferInfo> bufferInfo = inQueue.Dequeue(); 512 std::shared_lock<std::shared_mutex> lock(codecMutex); 513 if (bufferInfo == nullptr || !bufferInfo->isValid) { 514 // Handle exceptions. 515 } 516 // Write stream data. 517 uint8_t *addr = OH_AVBuffer_GetAddr(bufferInfo->buffer); 518 int32_t capcacity = OH_AVBuffer_GetCapacity(bufferInfo->buffer); 519 if (size > capcacity) { 520 // Handle exceptions. 521 } 522 memcpy(addr, frameData, size); 523 // Configure the size, offset, and timestamp of the frame data. 524 OH_AVCodecBufferAttr info; 525 info.size = size; 526 info.offset = offset; 527 info.pts = pts; 528 info.flags = flags; 529 // Write the information to the buffer. 530 int32_t ret = OH_AVBuffer_SetBufferAttr(bufferInfo->buffer, &info); 531 if (ret != AV_ERR_OK) { 532 // Handle exceptions. 533 } 534 // Send the data to the input buffer for decoding. index is the index of the buffer. 535 ret = OH_VideoDecoder_PushInputBuffer(videoDec, bufferInfo->index); 536 if (ret != AV_ERR_OK) { 537 // Handle exceptions. 538 } 539 ``` 540 54112. Call **OH_VideoDecoder_RenderOutputBuffer()** or **OH_VideoDecoder_RenderOutputBufferAtTime()** to render the data and free the output buffer, or call **OH_VideoDecoder_FreeOutputBuffer()** to directly free the output buffer. 542 543 In the code snippet below, the following variables are used: 544 545 - **index**: parameter passed by the callback function **OnNewOutputBuffer**, which uniquely corresponds to the buffer. 546 - **buffer**: parameter passed by the callback function **OnNewOutputBuffer**. In surface mode, you cannot obtain the virtual address of the image by calling [OH_AVBuffer_GetAddr](../../reference/apis-avcodec-kit/_core.md#oh_avbuffer_getaddr). 547 548 ```c++ 549 std::shared_ptr<CodecBufferInfo> bufferInfo = outQueue.Dequeue(); 550 std::shared_lock<std::shared_mutex> lock(codecMutex); 551 if (bufferInfo == nullptr || !bufferInfo->isValid) { 552 // Handle exceptions. 553 } 554 // Obtain the decoded information. 555 OH_AVCodecBufferAttr info; 556 int32_t ret = OH_AVBuffer_GetBufferAttr(bufferInfo->buffer, &info); 557 if (ret != AV_ERR_OK) { 558 // Handle exceptions. 559 } 560 // You can determine the value. 561 bool isRender; 562 bool isNeedRenderAtTime; 563 if (isRender) { 564 // Render the data and free the output buffer. index is the index of the buffer. 565 if (isNeedRenderAtTime){ 566 // Obtain the system absolute time, and call renderTimestamp to display the time based on service requirements. 567 int64_t renderTimestamp = 568 std::chrono::duration_cast<std::chrono::nanoseconds>(std::chrono::high_resolution_clock::now().time_since_epoch()).count(); 569 ret = OH_VideoDecoder_RenderOutputBufferAtTime(videoDec, bufferInfo->index, renderTimestamp); 570 } else { 571 ret = OH_VideoDecoder_RenderOutputBuffer(videoDec, bufferInfo->index); 572 } 573 574 } else { 575 // Free the output buffer. 576 ret = OH_VideoDecoder_FreeOutputBuffer(videoDec, bufferInfo->index); 577 } 578 if (ret != AV_ERR_OK) { 579 // Handle exceptions. 580 } 581 582 ``` 583 584 > **NOTE** 585 > 586 > To obtain the buffer attributes, such as **pixel_format** and **stride**, call [OH_NativeWindow_NativeWindowHandleOpt](../../reference/apis-arkgraphics2d/_native_window.md#oh_nativewindow_nativewindowhandleopt). 587 58813. (Optional) Call **OH_VideoDecoder_Flush()** to refresh the decoder. 589 590 After **OH_VideoDecoder_Flush** is called, the decoder remains in the Running state, but the input and output data and parameter set (such as the H.264 PPS/SPS) buffered in the decoder are cleared. 591 592 To continue decoding, you must call **OH_VideoDecoder_Start** again. 593 594 In the code snippet below, the following variables are used: 595 596 - **xpsData** and **xpsSize**: PPS/SPS information. For details about how to obtain such information, see [Media Data Demuxing](./audio-video-demuxer.md). 597 598 ```c++ 599 std::unique_lock<std::shared_mutex> lock(codecMutex); 600 // Refresh the decoder. 601 int32_t ret = OH_VideoDecoder_Flush(videoDec); 602 if (ret != AV_ERR_OK) { 603 // Handle exceptions. 604 } 605 inQueue.Flush(); 606 outQueue.Flush(); 607 // Start decoding again. 608 ret = OH_VideoDecoder_Start(videoDec); 609 if (ret != AV_ERR_OK) { 610 // Handle exceptions. 611 } 612 613 std::shared_ptr<CodecBufferInfo> bufferInfo = outQueue.Dequeue(); 614 if (bufferInfo == nullptr || !bufferInfo->isValid) { 615 // Handle exceptions. 616 } 617 // Retransfer PPS/SPS. 618 // Configure the frame PPS/SPS information. 619 uint8_t *addr = OH_AVBuffer_GetAddr(bufferInfo->buffer); 620 int32_t capcacity = OH_AVBuffer_GetCapacity(bufferInfo->buffer); 621 if (xpsSize > capcacity) { 622 // Handle exceptions. 623 } 624 memcpy(addr, xpsData, xpsSize); 625 OH_AVCodecBufferAttr info; 626 info.flags = AVCODEC_BUFFER_FLAG_CODEC_DATA; 627 // Write the information to the buffer. 628 ret = OH_AVBuffer_SetBufferAttr(bufferInfo->buffer, &info); 629 if (ret != AV_ERR_OK) { 630 // Handle exceptions. 631 } 632 // Push the frame data to the decoder. index is the index of the corresponding queue. 633 ret = OH_VideoDecoder_PushInputBuffer(videoDec, bufferInfo->index); 634 if (ret != AV_ERR_OK) { 635 // Handle exceptions. 636 } 637 638 ``` 639 640 > **NOTE** 641 > 642 > When **OH_VideoDecoder_Start** s called again after the flush operation, the PPS/SPS must be retransferred. 643 644 64514. (Optional) Call **OH_VideoDecoder_Reset()** to reset the decoder. 646 647 After **OH_VideoDecoder_Reset** is called, the decoder returns to the Initialized state. To continue decoding, you must call **OH_VideoDecoder_Configure**, **OH_VideoDecoder_SetSurface**, and **OH_VideoDecoder_Prepare** in sequence. 648 649 ```c++ 650 std::unique_lock<std::shared_mutex> lock(codecMutex); 651 // Reset the decoder. 652 int32_t ret = OH_VideoDecoder_Reset(videoDec); 653 if (ret != AV_ERR_OK) { 654 // Handle exceptions. 655 } 656 inQueue.Flush(); 657 outQueue.Flush(); 658 // Reconfigure the decoder. 659 ret = OH_VideoDecoder_Configure(videoDec, format); 660 if (ret != AV_ERR_OK) { 661 // Handle exceptions. 662 } 663 // Reconfigure the surface in surface mode. This is not required in buffer mode. 664 ret = OH_VideoDecoder_SetSurface(videoDec, nativeWindow); 665 if (ret != AV_ERR_OK) { 666 // Handle exceptions. 667 } 668 // The decoder is ready again. 669 ret = OH_VideoDecoder_Prepare(videoDec); 670 if (ret != AV_ERR_OK) { 671 // Handle exceptions. 672 } 673 ``` 674 67515. (Optional) Call **OH_VideoDecoder_Stop()** to stop the decoder. 676 677 After **OH_VideoDecoder_Stop()** is called, the decoder retains the decoding instance and releases the input and output buffers. You can directly call **OH_VideoDecoder_Start** to continue decoding. The first input buffer must carry the parameter set, starting from the IDR frame. 678 679 ```c++ 680 std::unique_lock<std::shared_mutex> lock(codecMutex); 681 // Stop the decoder. 682 int32_t ret = OH_VideoDecoder_Stop(videoDec); 683 if (ret != AV_ERR_OK) { 684 // Handle exceptions. 685 } 686 inQueue.Flush(); 687 outQueue.Flush(); 688 ``` 689 69016. Call **OH_VideoDecoder_Destroy()** to destroy the decoder instance and release resources. 691 692 > **NOTE** 693 > 694 > This API cannot be called in the callback function. 695 > 696 > After the call, you must set a null pointer to the decoder to prevent program errors caused by wild pointers. 697 698 ```c++ 699 std::unique_lock<std::shared_mutex> lock(codecMutex); 700 // Call OH_VideoDecoder_Destroy to destroy the decoder. 701 int32_t ret = AV_ERR_OK; 702 if (videoDec != nullptr) { 703 ret = OH_VideoDecoder_Destroy(videoDec); 704 videoDec = nullptr; 705 } 706 if (ret != AV_ERR_OK) { 707 // Handle exceptions. 708 } 709 inQueue.Flush(); 710 outQueue.Flush(); 711 ``` 712 713### Buffer Output 714 715The following walks you through how to implement the entire video decoding process in buffer mode. In this example, an H.264 file is input and decoded into a YUV file. 716Currently, the VideoDecoder module supports only data rotation in asynchronous mode. 717 7181. Add the header files. 719 720 ```c++ 721 #include <multimedia/player_framework/native_avcodec_videodecoder.h> 722 #include <multimedia/player_framework/native_avcapability.h> 723 #include <multimedia/player_framework/native_avcodec_base.h> 724 #include <multimedia/player_framework/native_avformat.h> 725 #include <multimedia/player_framework/native_avbuffer.h> 726 #include <native_buffer/native_buffer.h> 727 #include <fstream> 728 ``` 729 7302. Create a decoder instance. 731 732 The procedure is the same as that in surface mode and is not described here. 733 734 ```c++ 735 // To create a decoder by name, call OH_AVCapability_GetName to obtain the codec names available and then call OH_VideoDecoder_CreateByName. If your application has special requirements, for example, expecting a decoder that supports a certain resolution, you can call OH_AVCodec_GetCapability to query the capability first. 736 OH_AVCapability *capability = OH_AVCodec_GetCapability(OH_AVCODEC_MIMETYPE_VIDEO_AVC, false); 737 const char *name = OH_AVCapability_GetName(capability); 738 OH_AVCodec *videoDec = OH_VideoDecoder_CreateByName(name); 739 ``` 740 741 ```c++ 742 // Create a decoder by MIME type. Only specific codecs recommended by the system can be created in this way. 743 // If multiple codecs need to be created, create hardware decoder instances first. If the hardware resources are insufficient, create software decoder instances. 744 // Create an H.264 decoder for software/hardware decoding. 745 OH_AVCodec *videoDec = OH_VideoDecoder_CreateByMime(OH_AVCODEC_MIMETYPE_VIDEO_AVC); 746 // Create an H.265 decoder for hardware decoding. 747 OH_AVCodec *videoDec = OH_VideoDecoder_CreateByMime(OH_AVCODEC_MIMETYPE_VIDEO_HEVC); 748 ``` 749 7503. Call **OH_VideoDecoder_RegisterCallback()** to register the callback functions. 751 752 Register the **OH_AVCodecCallback** struct that defines the following callback function pointers: 753 754 - **OH_AVCodecOnError**, a callback used to report a codec operation error. For details about the error codes, see [OH_AVCodecOnError](../../reference/apis-avcodec-kit/_codec_base.md#oh_avcodeconerror). 755 - **OH_AVCodecOnStreamChanged**, a callback used to report a codec stream change, for example, stream width or height change. 756 - **OH_AVCodecOnNeedInputBuffer**, a callback used to report input data required, which means that the decoder is ready for receiving data. 757 - **OH_AVCodecOnNewOutputBuffer**, a callback used to report output data generated, which means that decoding is complete. 758 759 You need to process the callback functions to ensure that the decoder runs properly. 760 761 <!--RP2--><!--RP2End--> 762 763 ```c++ 764 int32_t cropTop = 0; 765 int32_t cropBottom = 0; 766 int32_t cropLeft = 0; 767 int32_t cropRight = 0; 768 bool isFirstFrame = true; 769 // Implement the OH_AVCodecOnError callback function. 770 static void OnError(OH_AVCodec *codec, int32_t errorCode, void *userData) 771 { 772 // Process the error code in the callback. 773 (void)codec; 774 (void)errorCode; 775 (void)userData; 776 } 777 778 // Implement the OH_AVCodecOnStreamChanged callback function. 779 static void OnStreamChanged(OH_AVCodec *codec, OH_AVFormat *format, void *userData) 780 { 781 // Optional. Configure the data when you want to obtain the video width, height, and stride. 782 // The changed video width, height, and stride can be obtained through format. 783 (void)codec; 784 (void)userData; 785 OH_AVFormat_GetIntValue(format, OH_MD_KEY_VIDEO_PIC_WIDTH, &width); 786 OH_AVFormat_GetIntValue(format, OH_MD_KEY_VIDEO_PIC_HEIGHT, &height); 787 OH_AVFormat_GetIntValue(format, OH_MD_KEY_VIDEO_STRIDE, &widthStride); 788 OH_AVFormat_GetIntValue(format, OH_MD_KEY_VIDEO_SLICE_HEIGHT, &heightStride); 789 // (Optional) Obtain the cropped rectangle information. 790 OH_AVFormat_GetIntValue(format, OH_MD_KEY_VIDEO_CROP_TOP, &cropTop); 791 OH_AVFormat_GetIntValue(format, OH_MD_KEY_VIDEO_CROP_BOTTOM, &cropBottom); 792 OH_AVFormat_GetIntValue(format, OH_MD_KEY_VIDEO_CROP_LEFT, &cropLeft); 793 OH_AVFormat_GetIntValue(format, OH_MD_KEY_VIDEO_CROP_RIGHT, &cropRight); 794 } 795 796 // Implement the OH_AVCodecOnNeedInputBuffer callback function. 797 static void OnNeedInputBuffer(OH_AVCodec *codec, uint32_t index, OH_AVBuffer *buffer, void *userData) 798 { 799 // The data buffer of the input frame and its index are sent to inQueue. 800 (void)codec; 801 (void)userData; 802 inQueue.Enqueue(std::make_shared<CodecBufferInfo>(index, buffer)); 803 } 804 805 // Implement the OH_AVCodecOnNewOutputBuffer callback function. 806 static void OnNewOutputBuffer(OH_AVCodec *codec, uint32_t index, OH_AVBuffer *buffer, void *userData) 807 { 808 // Optional. Configure the data when you want to obtain the video width, height, and stride. 809 // Obtain the video width, height, and stride. 810 if (isFirstFrame) { 811 OH_AVFormat *format = OH_VideoDecoder_GetOutputDescription(codec); 812 OH_AVFormat_GetIntValue(format, OH_MD_KEY_VIDEO_PIC_WIDTH, &width); 813 OH_AVFormat_GetIntValue(format, OH_MD_KEY_VIDEO_PIC_HEIGHT, &height); 814 OH_AVFormat_GetIntValue(format, OH_MD_KEY_VIDEO_STRIDE, &widthStride); 815 OH_AVFormat_GetIntValue(format, OH_MD_KEY_VIDEO_SLICE_HEIGHT, &heightStride); 816 // (Optional) Obtain the cropped rectangle information. 817 OH_AVFormat_GetIntValue(format, OH_MD_KEY_VIDEO_CROP_TOP, &cropTop); 818 OH_AVFormat_GetIntValue(format, OH_MD_KEY_VIDEO_CROP_BOTTOM, &cropBottom); 819 OH_AVFormat_GetIntValue(format, OH_MD_KEY_VIDEO_CROP_LEFT, &cropLeft); 820 OH_AVFormat_GetIntValue(format, OH_MD_KEY_VIDEO_CROP_RIGHT, &cropRight); 821 OH_AVFormat_Destroy(format); 822 isFirstFrame = false; 823 } 824 // The data buffer of the finished frame and its index are sent to outQueue. 825 (void)userData; 826 outQueue.Enqueue(std::make_shared<CodecBufferInfo>(index, buffer)); 827 } 828 // Call OH_VideoDecoder_RegisterCallback() to register the callback functions. 829 OH_AVCodecCallback cb = {&OnError, &OnStreamChanged, &OnNeedInputBuffer, &OnNewOutputBuffer}; 830 // Set the asynchronous callbacks. 831 int32_t ret = OH_VideoDecoder_RegisterCallback(videoDec, cb, nullptr); // nullptr: userData is null. 832 if (ret != AV_ERR_OK) { 833 // Handle exceptions. 834 } 835 ``` 836 837 > **NOTE** 838 > 839 > In the callback functions, pay attention to multi-thread synchronization for operations on the data queue. 840 > 841 8424. (Optional) Call **OH_VideoDecoder_SetDecryptionConfig** to set the decryption configuration. Call this API after the media key system information is obtained and a media key is obtained but before **Prepare()** is called. For details about how to obtain such information, see step 4 in [Media Data Demuxing](audio-video-demuxer.md). In buffer mode, the DRM decryption capability supports only non-secure video channels. For details about DRM APIs, see [DRM](../../reference/apis-drm-kit/_drm.md). 843 844 Add the header files. 845 846 ```c++ 847 #include <multimedia/drm_framework/native_mediakeysystem.h> 848 #include <multimedia/drm_framework/native_mediakeysession.h> 849 #include <multimedia/drm_framework/native_drm_err.h> 850 #include <multimedia/drm_framework/native_drm_common.h> 851 ``` 852 853 Link the dynamic library in the CMake script. 854 855 ``` cmake 856 target_link_libraries(sample PUBLIC libnative_drm.so) 857 ``` 858 859 The following is the sample code: 860 861 ```c++ 862 // Create a media key system based on the media key system information. The following uses com.clearplay.drm as an example. 863 MediaKeySystem *system = nullptr; 864 int32_t ret = OH_MediaKeySystem_Create("com.clearplay.drm", &system); 865 if (system == nullptr) { 866 printf("create media key system failed"); 867 return; 868 } 869 870 // Create a media key session. 871 // To use a non-secure video channel, create a MediaKeySession with the content protection level of CONTENT_PROTECTION_LEVEL_SW_CRYPTO or higher. 872 MediaKeySession *session = nullptr; 873 DRM_ContentProtectionLevel contentProtectionLevel = CONTENT_PROTECTION_LEVEL_SW_CRYPTO; 874 ret = OH_MediaKeySystem_CreateMediaKeySession(system, &contentProtectionLevel, &session); 875 if (ret != DRM_OK) { 876 // If the creation fails, refer to the DRM interface document and check logs. 877 printf("create media key session failed."); 878 return; 879 } 880 if (session == nullptr) { 881 printf("media key session is nullptr."); 882 return; 883 } 884 // Generate a media key request and set the response to the media key request. 885 // Set the decryption configuration, that is, set the decryption session and secure video channel flag to the decoder. 886 bool secureVideoPath = false; 887 ret = OH_VideoDecoder_SetDecryptionConfig(videoDec, session, secureVideoPath); 888 ``` 889 8905. Call **OH_VideoDecoder_Configure()** to configure the decoder. 891 892 The procedure is the same as that in surface mode and is not described here. 893 894 ```c++ 895 OH_AVFormat *format = OH_AVFormat_Create(); 896 // Set the format. 897 OH_AVFormat_SetIntValue (format, OH_MD_KEY_WIDTH, width); // Mandatory. 898 OH_AVFormat_SetIntValue(format, OH_MD_KEY_HEIGHT, height); // Mandatory. 899 OH_AVFormat_SetIntValue(format, OH_MD_KEY_PIXEL_FORMAT, pixelFormat); 900 // Configure the decoder. 901 int32_t ret = OH_VideoDecoder_Configure(videoDec, format); 902 if (ret != AV_ERR_OK) { 903 // Handle exceptions. 904 } 905 OH_AVFormat_Destroy(format); 906 ``` 907 9086. Call **OH_VideoDecoder_Prepare()** to prepare internal resources for the decoder. 909 910 ```c++ 911 int32_t ret = OH_VideoDecoder_Prepare(videoDec); 912 if (ret != AV_ERR_OK) { 913 // Handle exceptions. 914 } 915 ``` 916 9177. Call **OH_VideoDecoder_Start()** to start the decoder. 918 919 ```c++ 920 std::unique_ptr<std::ofstream> outputFile = std::make_unique<std::ofstream>(); 921 outputFile->open("/*yourpath*.yuv", std::ios::out | std::ios::binary | std::ios::ate); 922 // Start the decoder. 923 int32_t ret = OH_VideoDecoder_Start(videoDec); 924 if (ret != AV_ERR_OK) { 925 // Handle exceptions. 926 } 927 ``` 928 9298. (Optional) Call **OH_AVCencInfo_SetAVBuffer()** to set the CENC information. 930 931 The procedure is the same as that in surface mode and is not described here. 932 933 The following is the sample code: 934 935 ```c++ 936 uint32_t keyIdLen = DRM_KEY_ID_SIZE; 937 uint8_t keyId[] = { 938 0xd4, 0xb2, 0x01, 0xe4, 0x61, 0xc8, 0x98, 0x96, 939 0xcf, 0x05, 0x22, 0x39, 0x8d, 0x09, 0xe6, 0x28}; 940 uint32_t ivLen = DRM_KEY_IV_SIZE; 941 uint8_t iv[] = { 942 0xbf, 0x77, 0xed, 0x51, 0x81, 0xde, 0x36, 0x3e, 943 0x52, 0xf7, 0x20, 0x4f, 0x72, 0x14, 0xa3, 0x95}; 944 uint32_t encryptedBlockCount = 0; 945 uint32_t skippedBlockCount = 0; 946 uint32_t firstEncryptedOffset = 0; 947 uint32_t subsampleCount = 1; 948 DrmSubsample subsamples[1] = { {0x10, 0x16} }; 949 // Create a CencInfo instance. 950 OH_AVCencInfo *cencInfo = OH_AVCencInfo_Create(); 951 if (cencInfo == nullptr) { 952 // Handle exceptions. 953 } 954 // Set the decryption algorithm. 955 OH_AVErrCode errNo = OH_AVCencInfo_SetAlgorithm(cencInfo, DRM_ALG_CENC_AES_CTR); 956 if (errNo != AV_ERR_OK) { 957 // Handle exceptions. 958 } 959 // Set KeyId and Iv. 960 errNo = OH_AVCencInfo_SetKeyIdAndIv(cencInfo, keyId, keyIdLen, iv, ivLen); 961 if (errNo != AV_ERR_OK) { 962 // Handle exceptions. 963 } 964 // Set the sample information. 965 errNo = OH_AVCencInfo_SetSubsampleInfo(cencInfo, encryptedBlockCount, skippedBlockCount, firstEncryptedOffset, 966 subsampleCount, subsamples); 967 if (errNo != AV_ERR_OK) { 968 // Handle exceptions. 969 } 970 // Set the mode. KeyId, Iv, and SubSamples have been set. 971 errNo = OH_AVCencInfo_SetMode(cencInfo, DRM_CENC_INFO_KEY_IV_SUBSAMPLES_SET); 972 if (errNo != AV_ERR_OK) { 973 // Handle exceptions. 974 } 975 // Set CencInfo to the AVBuffer. 976 errNo = OH_AVCencInfo_SetAVBuffer(cencInfo, buffer); 977 if (errNo != AV_ERR_OK) { 978 // Handle exceptions. 979 } 980 // Destroy the CencInfo instance. 981 errNo = OH_AVCencInfo_Destroy(cencInfo); 982 if (errNo != AV_ERR_OK) { 983 // Handle exceptions. 984 } 985 ``` 986 9879. Call **OH_VideoDecoder_PushInputBuffer()** to push the stream to the input buffer for decoding. 988 989 The procedure is the same as that in surface mode and is not described here. 990 991 ```c++ 992 std::shared_ptr<CodecBufferInfo> bufferInfo = inQueue.Dequeue(); 993 std::shared_lock<std::shared_mutex> lock(codecMutex); 994 if (bufferInfo == nullptr || !bufferInfo->isValid) { 995 // Handle exceptions. 996 } 997 // Write stream data. 998 uint8_t *addr = OH_AVBuffer_GetAddr(bufferInfo->buffer); 999 int32_t capcacity = OH_AVBuffer_GetCapacity(bufferInfo->buffer); 1000 if (size > capcacity) { 1001 // Handle exceptions. 1002 } 1003 memcpy(addr, frameData, size); 1004 // Configure the size, offset, and timestamp of the frame data. 1005 OH_AVCodecBufferAttr info; 1006 info.size = size; 1007 info.offset = offset; 1008 info.pts = pts; 1009 info.flags = flags; 1010 // Write the information to the buffer. 1011 ret = OH_AVBuffer_SetBufferAttr(bufferInfo->buffer, &info); 1012 if (ret != AV_ERR_OK) { 1013 // Handle exceptions. 1014 } 1015 // Send the data to the input buffer for decoding. index is the index of the buffer. 1016 int32_t ret = OH_VideoDecoder_PushInputBuffer(videoDec, bufferInfo->index); 1017 if (ret != AV_ERR_OK) { 1018 // Handle exceptions. 1019 } 1020 ``` 1021 102210. Call **OH_VideoDecoder_FreeOutputBuffer()** to release decoded frames. 1023 1024 In the code snippet below, the following variables are used: 1025 1026 - **index**: parameter passed by the callback function **OnNewOutputBuffer**, which uniquely corresponds to the buffer. 1027 - **buffer**: parameter passed by the callback function **OnNewOutputBuffer**. You can obtain the virtual address of an image by calling [OH_AVBuffer_GetAddr](../../reference/apis-avcodec-kit/_core.md#oh_avbuffer_getaddr). 1028 1029 ```c++ 1030 std::shared_ptr<CodecBufferInfo> bufferInfo = outQueue.Dequeue(); 1031 std::shared_lock<std::shared_mutex> lock(codecMutex); 1032 if (bufferInfo == nullptr || !bufferInfo->isValid) { 1033 // Handle exceptions. 1034 } 1035 // Obtain the decoded information. 1036 OH_AVCodecBufferAttr info; 1037 int32_t ret = OH_AVBuffer_GetBufferAttr(bufferInfo->buffer, &info); 1038 if (ret != AV_ERR_OK) { 1039 // Handle exceptions. 1040 } 1041 // Write the decoded data (specified by data) to the output file. 1042 outputFile->write(reinterpret_cast<char *>(OH_AVBuffer_GetAddr(bufferInfo->buffer)), info.size); 1043 // Free the buffer that stores the output data. index is the index of the buffer. 1044 ret = OH_VideoDecoder_FreeOutputBuffer(videoDec, bufferInfo->index); 1045 if (ret != AV_ERR_OK) { 1046 // Handle exceptions. 1047 } 1048 ``` 1049 1050 To copy the Y, U, and V components of an NV12 or NV21 image to another buffer in sequence, perform the following steps (taking an NV12 image as an example), 1051 1052 presenting the image layout of **width**, **height**, **wStride**, and **hStride**. 1053 1054 - **OH_MD_KEY_VIDEO_PIC_WIDTH** corresponds to **width**. 1055 - **OH_MD_KEY_VIDEO_PIC_HEIGHT** corresponds to **height**. 1056 - **OH_MD_KEY_VIDEO_STRIDE** corresponds to **wStride**. 1057 - **OH_MD_KEY_VIDEO_SLICE_HEIGHT** corresponds to **hStride**. 1058 1059  1060 1061 Add the header files. 1062 1063 ```c++ 1064 #include <string.h> 1065 ``` 1066 1067 The following is the sample code: 1068 1069 ```c++ 1070 // Obtain the width and height of the source buffer by using the callback function OnStreamChanged or OH_VideoDecoder_GetOutputDescription. 1071 struct Rect 1072 { 1073 int32_t width; 1074 int32_t height; 1075 }; 1076 1077 struct DstRect // Width, height, and stride of the destination buffer. They are set by the caller. 1078 { 1079 int32_t wStride; 1080 int32_t hStride; 1081 }; 1082 // Obtain the width, height, and stride of the source buffer by using the callback function OnStreamChanged or OH_VideoDecoder_GetOutputDescription. 1083 struct SrcRect 1084 { 1085 int32_t wStride; 1086 int32_t hStride; 1087 }; 1088 1089 Rect rect = {320, 240}; 1090 DstRect dstRect = {320, 240}; 1091 SrcRect srcRect = {320, 256}; 1092 uint8_t* dst = new uint8_t[dstRect.hStride * dstRect.wStride * 3 / 2]; // Pointer to the target memory area. 1093 uint8_t* src = new uint8_t[srcRect.hStride * srcRect.wStride * 3 / 2]; // Pointer to the source memory area. 1094 uint8_t* dstTemp = dst; 1095 uint8_t* srcTemp = src; 1096 1097 // Y: Copy the source data in the Y region to the target data in another region. 1098 for (int32_t i = 0; i < rect.height; ++i) { 1099 // Copy a row of data from the source to a row of the target. 1100 memcpy(dstTemp, srcTemp, rect.width); 1101 // Update the pointers to the source data and target data to copy the next row. The pointers to the source data and target data are moved downwards by one wStride each time the source data and target data are updated. 1102 dstTemp += dstRect.wStride; 1103 srcTemp += srcRect.wStride; 1104 } 1105 // Padding. 1106 // Update the pointers to the source data and target data. The pointers move downwards by one padding. 1107 dstTemp += (dstRect.hStride - rect.height) * dstRect.wStride; 1108 srcTemp += (srcRect.hStride - rect.height) * srcRect.wStride; 1109 rect.height >>= 1; 1110 // UV: Copy the source data in the UV region to the target data in another region. 1111 for (int32_t i = 0; i < rect.height; ++i) { 1112 memcpy(dstTemp, srcTemp, rect.width); 1113 dstTemp += dstRect.wStride; 1114 srcTemp += srcRect.wStride; 1115 } 1116 1117 delete[] dst; 1118 dst = nullptr; 1119 delete[] src; 1120 src = nullptr; 1121 ``` 1122 1123 When processing buffer data (before releasing data) during hardware decoding, the output callback AVBuffer receives the image data after width and height alignment. Generally, copy the image width, height, stride, and pixel format to ensure correct processing of the decoded data. For details, see step 3 in [Buffer Output](#buffer-output). 1124 1125The subsequent processes (including refreshing, resetting, stopping, and destroying the decoder) are basically the same as those in surface mode. For details, see steps 13-16 in [Surface Output](#surface-output). 1126 1127<!--RP5--> 1128<!--RP5End--> 1129