# Video Encoding

You can call the native APIs provided by the VideoEncoder module to encode a video, that is, to compress video data into video streams.

<!--RP3--><!--RP3End-->

Currently, the following encoding capabilities are supported:

| Container Format| Video Encoding Type          |
| --------------- | ---------------------------- |
| mp4             | HEVC (H.265) and AVC (H.264) |

Only hardware encoding is supported. When an encoder is created based on the MIME type, H.264 (OH_AVCODEC_MIMETYPE_VIDEO_AVC) and H.265 (OH_AVCODEC_MIMETYPE_VIDEO_HEVC) are supported.

You can perform a [capability query](obtain-supported-codecs.md) to obtain the encoding capability range.

<!--RP1--><!--RP1End-->

The following table lists the video encoding capabilities supported:

<!--RP4-->
| Capability                                                    | How to Use                                                                                    |
| ------------------------------------------------------------- | --------------------------------------------------------------------------------------------- |
| Layered encoding<br>Setting the LTR frame and reference frame | For details, see [Temporally Scalable Video Coding](video-encoding-temporal-scalability.md). |
<!--RP4End-->

## Restrictions

- The buffer mode does not support 10-bit image data.
- Due to limited hardware encoder resources, you must call **OH_VideoEncoder_Destroy** to destroy every encoder instance when it is no longer needed.
- Once **Flush**, **Reset**, or **Stop** is called, the system reclaims the OH_AVBuffer. Therefore, do not continue to operate the OH_AVBuffer obtained through the previous callback function.
- The buffer mode and surface mode use the same APIs. Therefore, the surface mode is described as an example.
- In buffer mode, after obtaining the pointer to an OH_AVBuffer object through the callback function **OH_AVCodecOnNeedInputBuffer**, call **OH_VideoEncoder_PushInputBuffer** to notify the system that the buffer has been fully utilized. In this way, the system will proceed with encoding the data contained in the buffer. If the OH_NativeBuffer object is obtained through **OH_AVBuffer_GetNativeBuffer** and its lifecycle extends beyond that of the OH_AVBuffer pointer object, you must duplicate the data. In this case, you should manage the lifecycle of the newly generated OH_NativeBuffer object to ensure that the object can be correctly used and released.

## Surface Input and Buffer Input

- Surface input and buffer input differ in data sources.

- They are applicable to different scenarios.
  - Surface input indicates that the OHNativeWindow is used to transfer the input data. It supports connection with other modules, such as the camera module.
  - Buffer input refers to a pre-allocated memory area. The caller needs to copy the original data to this memory area. It is more applicable to scenarios such as reading video data from files.

- The two also differ slightly in the API calling modes:
  - In buffer mode, the caller calls **OH_VideoEncoder_PushInputBuffer** to input data. In surface mode, the caller, before the encoder is ready, calls **OH_VideoEncoder_GetSurface** to obtain the OHNativeWindow for video data transmission.
  - In buffer mode, the caller uses **attr** in **OH_AVBuffer** to pass in the End of Stream (EOS) flag, and the encoder stops when it reads the last frame. In surface mode, the caller calls **OH_VideoEncoder_NotifyEndOfStream** to notify the encoder of EOS. (See the sketch after this list.)

For details about the development procedure, see [Surface Input](#surface-input) and [Buffer Input](#buffer-input).
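
The following sketch contrasts how EOS is signaled in the two modes. It is illustrative only: the variables **videoEnc**, **buffer**, and **index** are assumed to come from the setup steps and callbacks described in the walkthroughs below.

```c++
// Buffer mode (assumed context: the last input buffer obtained through OnNeedInputBuffer):
// mark the buffer with the EOS flag and push it.
OH_AVCodecBufferAttr attr;
attr.pts = 0;
attr.size = 0;
attr.offset = 0;
attr.flags = AVCODEC_BUFFER_FLAGS_EOS;
OH_AVBuffer_SetBufferAttr(buffer, &attr);
OH_VideoEncoder_PushInputBuffer(videoEnc, index);

// Surface mode: there is no EOS input buffer; notify the encoder directly.
OH_VideoEncoder_NotifyEndOfStream(videoEnc);
```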

## State Machine Interaction

The following figure shows the interaction between states.

1. An encoder enters the Initialized state in either of the following ways:
   - When an encoder instance is initially created, the encoder enters the Initialized state.
   - When **OH_VideoEncoder_Reset** is called in any state, the encoder returns to the Initialized state.

2. When the encoder is in the Initialized state, you can call **OH_VideoEncoder_Configure** to configure the encoder. After the configuration, the encoder enters the Configured state.
3. When the encoder is in the Configured state, you can call **OH_VideoEncoder_Prepare()** to switch it to the Prepared state.
4. When the encoder is in the Prepared state, you can call **OH_VideoEncoder_Start** to switch it to the Executing state.
   - When the encoder is in the Executing state, you can call **OH_VideoEncoder_Stop** to switch it back to the Prepared state.

5. In rare cases, the encoder may encounter an error and enter the Error state. If this is the case, a queue operation returns an invalid value or throws an exception.
   - When the encoder is in the Error state, you can either call **OH_VideoEncoder_Reset** to switch it to the Initialized state or call **OH_VideoEncoder_Destroy** to switch it to the Released state.

6. The Executing state has three substates: Flushed, Running, and End-of-Stream.
   - After **OH_VideoEncoder_Start** is called, the encoder enters the Running substate immediately.
   - When the encoder is in the Executing state, you can call **OH_VideoEncoder_Flush** to switch it to the Flushed substate.
   - After all data to be processed is transferred to the encoder, the **AVCODEC_BUFFER_FLAGS_EOS** flag is added to the last input buffer in the input buffer queue. Once this flag is detected, the encoder transitions to the End-of-Stream substate. In this state, the encoder does not accept new input, but continues to generate output until it reaches the tail frame.

7. When the encoder is no longer needed, you must call **OH_VideoEncoder_Destroy** to destroy the encoder instance. Then the encoder enters the Released state.

## How to Develop

Read [VideoEncoder](../../reference/apis-avcodec-kit/_video_encoder.md) for the API reference.

The figure below shows the call relationship of video encoding.

- The dotted line indicates an optional operation.

- The solid line indicates a mandatory operation.

### Linking the Dynamic Libraries in the CMake Script

```cmake
target_link_libraries(sample PUBLIC libnative_media_codecbase.so)
target_link_libraries(sample PUBLIC libnative_media_core.so)
target_link_libraries(sample PUBLIC libnative_media_venc.so)
```

> **NOTE**
>
> The word 'sample' in the preceding code snippet is only an example. Use the actual project directory name.
>

### Surface Input

The following walks you through how to implement the entire video encoding process in surface mode. In this example, surface data is input and encoded into an H.264 stream.

Currently, the VideoEncoder module supports only data rotation in asynchronous mode.
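
Before walking through each step, here is an outline of the surface-mode call sequence. It is a sketch only: the callback functions and the mandatory format keys are defined in the steps that follow, and all error handling is omitted.

```c++
// Illustrative outline of the surface-mode call sequence; details are in the steps below.
OH_AVCodec *videoEnc = OH_VideoEncoder_CreateByMime(OH_AVCODEC_MIMETYPE_VIDEO_AVC);    // Step 3
OH_AVCodecCallback cb = {&OnError, &OnStreamChanged, &OnNeedInputBuffer, &OnNewOutputBuffer};
OH_VideoEncoder_RegisterCallback(videoEnc, cb, NULL);                                  // Step 4
OH_AVFormat *format = OH_AVFormat_Create();                                            // Step 6: set the width, height, and pixel format on it.
OH_VideoEncoder_Configure(videoEnc, format);
OH_AVFormat_Destroy(format);
OHNativeWindow *nativeWindow = nullptr;
OH_VideoEncoder_GetSurface(videoEnc, &nativeWindow);                                   // Step 7
OH_VideoEncoder_Prepare(videoEnc);                                                     // Step 8
OH_VideoEncoder_Start(videoEnc);                                                       // Step 9
// ... the producer writes frames to nativeWindow; encoded frames arrive in OnNewOutputBuffer (steps 11-14) ...
OH_VideoEncoder_NotifyEndOfStream(videoEnc);                                           // Step 13
OH_VideoEncoder_Stop(videoEnc);                                                        // Step 17
OH_VideoEncoder_Destroy(videoEnc);                                                     // Step 18
```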

1. Add the header files.

    ```cpp
    #include <multimedia/player_framework/native_avcodec_videoencoder.h>
    #include <multimedia/player_framework/native_avcapability.h>
    #include <multimedia/player_framework/native_avcodec_base.h>
    #include <multimedia/player_framework/native_avformat.h>
    #include <multimedia/player_framework/native_avbuffer.h>
    #include <fstream>
    ```

2. Configure global variables.

    ```c++
    // (Mandatory) Configure the video frame width.
    int32_t width = 320;
    // (Mandatory) Configure the video frame height.
    int32_t height = 240;
    // (Mandatory) Configure the video pixel format.
    constexpr OH_AVPixelFormat DEFAULT_PIXELFORMAT = AV_PIXEL_FORMAT_NV12;
    int32_t widthStride = 0;
    int32_t heightStride = 0;
    ```

3. Create an encoder instance.

    You can create an encoder by name or MIME type. In the code snippet below, the following variables are used:

    - **videoEnc**: pointer to the video encoder instance.
    - **capability**: pointer to the encoder's capability.
    - **OH_AVCODEC_MIMETYPE_VIDEO_AVC**: AVC video codec.

    The following is an example:

    ```c++
    // Create an encoder by name. If your application has special requirements, for example, expecting an encoder that supports a certain resolution, you can call OH_AVCodec_GetCapability to query the capability first.
    OH_AVCapability *capability = OH_AVCodec_GetCapability(OH_AVCODEC_MIMETYPE_VIDEO_AVC, true);
    // Alternatively, query the capability by category to create a hardware encoder instance.
    // OH_AVCapability *capability = OH_AVCodec_GetCapabilityByCategory(OH_AVCODEC_MIMETYPE_VIDEO_AVC, true, HARDWARE);
    const char *codecName = OH_AVCapability_GetName(capability);
    OH_AVCodec *videoEnc = OH_VideoEncoder_CreateByName(codecName);
    ```

    ```c++
    // Create an encoder by MIME type. Only specific codecs recommended by the system can be created in this way.
    // Only hardware encoders can be created.
    OH_AVCodec *videoEnc = OH_VideoEncoder_CreateByMime(OH_AVCODEC_MIMETYPE_VIDEO_AVC);
    ```

4. Call **OH_VideoEncoder_RegisterCallback()** to register the callback functions.

    Register the **OH_AVCodecCallback** struct that defines the following callback function pointers:

    - **OH_AVCodecOnError**, a callback used to report a codec operation error. For details about the error codes, see [OH_AVCodecOnError](../../reference/apis-avcodec-kit/_codec_base.md#oh_avcodeconerror).
    - **OH_AVCodecOnStreamChanged**, a callback used to report a codec stream change, for example, a format change.
    - **OH_AVCodecOnNeedInputBuffer**, a callback used to report input data required. This callback does not take effect, since you input data through the obtained surface.
    - **OH_AVCodecOnNewOutputBuffer**, a callback used to report output data generated, which means that encoding is complete.

    <!--RP2--><!--RP2End-->

    The following is an example:

    <!--RP5-->
    ```c++
    // Set the OH_AVCodecOnError callback function, which is used to report a codec operation error.
    static void OnError(OH_AVCodec *codec, int32_t errorCode, void *userData)
    {
        // Process the error code in the callback.
        (void)codec;
        (void)errorCode;
        (void)userData;
    }
    ```
    <!--RP5End-->

    <!--RP12-->
    ```c++
    // Set the OH_AVCodecOnStreamChanged callback function, which is used to report an encoding stream change.
    static void OnStreamChanged(OH_AVCodec *codec, OH_AVFormat *format, void *userData)
    {
        // This callback is useless in encoding scenarios.
        (void)codec;
        (void)format;
        (void)userData;
    }
    ```
    <!--RP12End-->

    ```c++
    // Set the OH_AVCodecOnNeedInputBuffer callback function, which is used to send an input frame to the data queue.
    static void OnNeedInputBuffer(OH_AVCodec *codec, uint32_t index, OH_AVBuffer *buffer, void *userData)
    {
        // In surface mode, this callback function does not take effect. Data is input through the obtained surface.
        (void)userData;
        (void)index;
        (void)buffer;
    }
    ```

    <!--RP6-->
    ```c++
    // Set the OH_AVCodecOnNewOutputBuffer callback function, which is used to send an encoded frame to the output queue.
    static void OnNewOutputBuffer(OH_AVCodec *codec, uint32_t index, OH_AVBuffer *buffer, void *userData)
    {
        // The index of the output frame buffer is sent to outIndexQueue.
        // The encoded frame data (specified by buffer) is sent to outBufferQueue.
        // Process the data.
        // Release the encoded frame.
    }
    ```
    <!--RP6End-->

    ```c++
    // Call OH_VideoEncoder_RegisterCallback() to register the callback functions.
    OH_AVCodecCallback cb = {&OnError, &OnStreamChanged, &OnNeedInputBuffer, &OnNewOutputBuffer};
    int32_t ret = OH_VideoEncoder_RegisterCallback(videoEnc, cb, NULL); // NULL: userData is null.
    if (ret != AV_ERR_OK) {
        // Exception handling.
    }
    ```

    > **NOTE**
    >
    > In the callback functions, pay attention to multi-thread synchronization for operations on the data queue.

5. (Optional) Call **OH_VideoEncoder_RegisterParameterCallback()** to register the frame-specific parameter callback function.

    For details, see [Temporally Scalable Video Coding](video-encoding-temporal-scalability.md).

    <!--RP7-->
    ```c++
    // 5.1 Implement the OH_VideoEncoder_OnNeedInputParameter callback function.
    static void OnNeedInputParameter(OH_AVCodec *codec, uint32_t index, OH_AVFormat *parameter, void *userData)
    {
        // The index of the input frame parameter is sent to InParameterIndexQueue.
        // The input frame data (specified by parameter) is sent to InParameterQueue.
        // Process the data.
        // Write the frame-specific parameter.
    }

    // 5.2 Register the frame-specific parameter callback function.
    OH_VideoEncoder_OnNeedInputParameter inParaCb = OnNeedInputParameter;
    OH_VideoEncoder_RegisterParameterCallback(videoEnc, inParaCb, NULL); // NULL: userData is null.
    ```
    <!--RP7End-->

6. Call **OH_VideoEncoder_Configure()** to configure the encoder.

    For details about the configurable options, see [Video Dedicated Key-Value Pairs](../../reference/apis-avcodec-kit/_codec_base.md#media-data-key-value-pairs).

    For details about the parameter verification rules, see [OH_VideoEncoder_Configure()](../../reference/apis-avcodec-kit/_video_encoder.md#oh_videoencoder_configure).

    The parameter value ranges can be obtained through the capability query interface. For details, see [Obtaining Supported Codecs](obtain-supported-codecs.md).

    Currently, the following options must be configured for all supported formats: video frame width, video frame height, and video pixel format.
    In the code snippet below, the following variables are used:

    - **width**: 320 pixels, as configured in the global variables.
    - **height**: 240 pixels, as configured in the global variables.
    - **DEFAULT_PIXELFORMAT**: **AV_PIXEL_FORMAT_NV12** (the pixel format of the YUV file is NV12).

    ```c++
    // Configure the video frame rate.
    double frameRate = 30.0;
    // Configure the video YUV range flag.
    bool rangeFlag = false;
    // Configure the video primary color.
    int32_t primary = static_cast<int32_t>(OH_ColorPrimary::COLOR_PRIMARY_BT709);
    // Configure the transfer characteristics.
    int32_t transfer = static_cast<int32_t>(OH_TransferCharacteristic::TRANSFER_CHARACTERISTIC_BT709);
    // Configure the maximum matrix coefficient.
    int32_t matrix = static_cast<int32_t>(OH_MatrixCoefficient::MATRIX_COEFFICIENT_IDENTITY);
    // Configure the encoding profile.
    int32_t profile = static_cast<int32_t>(OH_AVCProfile::AVC_PROFILE_BASELINE);
    // Configure the encoding bit rate mode.
    int32_t rateMode = static_cast<int32_t>(OH_VideoEncodeBitrateMode::CBR);
    // Configure the key frame interval, in milliseconds.
    int32_t iFrameInterval = 23000;
    // Configure the bit rate.
    int64_t bitRate = 3000000;
    // Set the encoding quality.
    int32_t quality = 0;

    OH_AVFormat *format = OH_AVFormat_Create();
    OH_AVFormat_SetIntValue(format, OH_MD_KEY_WIDTH, width); // Mandatory
    OH_AVFormat_SetIntValue(format, OH_MD_KEY_HEIGHT, height); // Mandatory
    OH_AVFormat_SetIntValue(format, OH_MD_KEY_PIXEL_FORMAT, DEFAULT_PIXELFORMAT); // Mandatory

    OH_AVFormat_SetDoubleValue(format, OH_MD_KEY_FRAME_RATE, frameRate);
    OH_AVFormat_SetIntValue(format, OH_MD_KEY_RANGE_FLAG, rangeFlag);
    OH_AVFormat_SetIntValue(format, OH_MD_KEY_COLOR_PRIMARIES, primary);
    OH_AVFormat_SetIntValue(format, OH_MD_KEY_TRANSFER_CHARACTERISTICS, transfer);
    OH_AVFormat_SetIntValue(format, OH_MD_KEY_MATRIX_COEFFICIENTS, matrix);
    OH_AVFormat_SetIntValue(format, OH_MD_KEY_I_FRAME_INTERVAL, iFrameInterval);
    OH_AVFormat_SetIntValue(format, OH_MD_KEY_PROFILE, profile);
    // Configure OH_MD_KEY_QUALITY only when the bit rate mode is CQ.
    if (rateMode == static_cast<int32_t>(OH_VideoEncodeBitrateMode::CQ)) {
        OH_AVFormat_SetIntValue(format, OH_MD_KEY_QUALITY, quality);
    } else if (rateMode == static_cast<int32_t>(OH_VideoEncodeBitrateMode::CBR) ||
               rateMode == static_cast<int32_t>(OH_VideoEncodeBitrateMode::VBR)) {
        OH_AVFormat_SetLongValue(format, OH_MD_KEY_BITRATE, bitRate);
    }
    OH_AVFormat_SetIntValue(format, OH_MD_KEY_VIDEO_ENCODE_BITRATE_MODE, rateMode);
    int32_t ret = OH_VideoEncoder_Configure(videoEnc, format);
    if (ret != AV_ERR_OK) {
        // Exception handling.
    }
    OH_AVFormat_Destroy(format);
    ```

    > **NOTE**
    >
    > If an optional parameter is incorrectly configured, the error code **AV_ERR_INVALID_VAL** is returned. However, **OH_VideoEncoder_Configure()** does not fail. Instead, its execution continues with the default value.

7. Obtain a surface.

    Obtain the OHNativeWindow in surface mode. The surface must be obtained before the encoder is prepared.

    ```c++
    // Obtain the surface used for data input.
    OHNativeWindow *nativeWindow;
    int32_t ret = OH_VideoEncoder_GetSurface(videoEnc, &nativeWindow);
    if (ret != AV_ERR_OK) {
        // Exception handling.
    }
    // Use the OHNativeWindow* variable to obtain the address of the data to be filled through the producer interface.
    ```
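
    The snippet below sketches how a producer might fill the obtained window with one frame through the NativeWindow APIs. It is an illustration under assumptions only: the header, the mmap-based access to the buffer handle, and the omitted pixel-format and usage settings must be checked against the NativeWindow reference, and error handling is omitted.

    ```c++
    #include <native_window/external_window.h> // Assumed header for the NativeWindow APIs.
    #include <sys/mman.h>

    // Declare the frame geometry expected by the encoder (pixel format and usage settings are omitted here).
    OH_NativeWindow_NativeWindowHandleOpt(nativeWindow, SET_BUFFER_GEOMETRY, width, height);

    // Request a buffer, fill one frame, and flush the buffer so that the encoder receives it.
    OHNativeWindowBuffer *windowBuffer = nullptr;
    int fenceFd = -1;
    OH_NativeWindow_NativeWindowRequestBuffer(nativeWindow, &windowBuffer, &fenceFd);
    BufferHandle *bufferHandle = OH_NativeWindow_GetBufferHandleFromNative(windowBuffer);
    void *mappedAddr = mmap(bufferHandle->virAddr, bufferHandle->size, PROT_READ | PROT_WRITE, MAP_SHARED, bufferHandle->fd, 0);
    if (mappedAddr != MAP_FAILED) {
        // Copy one NV12 frame into mappedAddr, honoring bufferHandle->stride for each row.
        munmap(mappedAddr, bufferHandle->size);
    }
    Region region{nullptr, 0};
    OH_NativeWindow_NativeWindowFlushBuffer(nativeWindow, windowBuffer, fenceFd, region);
    ```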

    For details about how to use the OHNativeWindow* variable, see [OHNativeWindow](../../reference/apis-arkgraphics2d/_native_window.md#ohnativewindow).

8. Call **OH_VideoEncoder_Prepare()** to prepare internal resources for the encoder.

    ```c++
    int32_t ret = OH_VideoEncoder_Prepare(videoEnc);
    if (ret != AV_ERR_OK) {
        // Exception handling.
    }
    ```

9. Call **OH_VideoEncoder_Start()** to start the encoder.

    ```c++
    // Configure the path of the output file.
    std::string_view outputFilePath = "/*yourpath*.h264";
    std::unique_ptr<std::ofstream> outputFile = std::make_unique<std::ofstream>();
    outputFile->open(outputFilePath.data(), std::ios::out | std::ios::binary | std::ios::ate);
    // Start the encoder.
    int32_t ret = OH_VideoEncoder_Start(videoEnc);
    if (ret != AV_ERR_OK) {
        // Exception handling.
    }
    ```

10. (Optional) Call **OH_VideoEncoder_SetParameter()** to dynamically configure encoder parameters during running.

    For details about the configurable options, see [Video Dedicated Key-Value Pairs](../../reference/apis-avcodec-kit/_codec_base.md#media-data-key-value-pairs).

    <!--RP8-->
    ```c++
    OH_AVFormat *format = OH_AVFormat_Create();
    // Dynamically request IDR frames.
    OH_AVFormat_SetIntValue(format, OH_MD_KEY_REQUEST_I_FRAME, true);
    int32_t ret = OH_VideoEncoder_SetParameter(videoEnc, format);
    if (ret != AV_ERR_OK) {
        // Exception handling.
    }
    OH_AVFormat_Destroy(format);
    ```
    <!--RP8End-->

11. Write the image to encode.

    In step 7, you have obtained the OHNativeWindow* variable returned by **OH_VideoEncoder_GetSurface**. The data required for encoding is continuously input through the surface. Therefore, you do not need to process the **OnNeedInputBuffer** callback function or use **OH_VideoEncoder_PushInputBuffer** to input data.
    <!--RP13--><!--RP13End-->

12. (Optional) Call **OH_VideoEncoder_PushInputParameter()** to notify the encoder that the frame-specific parameter configuration is complete.

    In step 5, you have registered the frame-specific parameter callback function.

    In the code snippet below, the following variable is used:

    - **index**: parameter passed by the callback function **OnNeedInputParameter**, which uniquely corresponds to the buffer.

    ```c++
    int32_t ret = OH_VideoEncoder_PushInputParameter(videoEnc, index);
    if (ret != AV_ERR_OK) {
        // Exception handling.
    }
    ```

13. Call **OH_VideoEncoder_NotifyEndOfStream()** to notify the encoder of EOS.

    ```c++
    // In surface mode, you only need to call this API to notify the encoder of EOS.
    // In buffer mode, you need to set the AVCODEC_BUFFER_FLAGS_EOS flag and then call OH_VideoEncoder_PushInputBuffer to notify the encoder of EOS.
    int32_t ret = OH_VideoEncoder_NotifyEndOfStream(videoEnc);
    if (ret != AV_ERR_OK) {
        // Exception handling.
    }
    ```

14. Call **OH_VideoEncoder_FreeOutputBuffer()** to release encoded frames.

    In the code snippet below, the following variables are used:

    - **index**: parameter passed by the callback function **OnNewOutputBuffer**, which uniquely corresponds to the buffer.
    - **buffer**: parameter passed by the callback function **OnNewOutputBuffer**. In surface mode, you cannot obtain the virtual address of the image by calling **OH_AVBuffer_GetAddr**.

    ```c++
    // Obtain the encoded information.
    OH_AVCodecBufferAttr info;
    int32_t ret = OH_AVBuffer_GetBufferAttr(buffer, &info);
    if (ret != AV_ERR_OK) {
        // Exception handling.
    }
    // Write the encoded frame data (specified by buffer) to the output file.
    outputFile->write(reinterpret_cast<char *>(OH_AVBuffer_GetAddr(buffer)), info.size);
    // Free the output buffer. index is the index of the buffer.
    ret = OH_VideoEncoder_FreeOutputBuffer(videoEnc, index);
    if (ret != AV_ERR_OK) {
        // Exception handling.
    }
    ```

15. (Optional) Call **OH_VideoEncoder_Flush()** to refresh the encoder.

    After **OH_VideoEncoder_Flush** is called, the encoder remains in the Running state, but the input and output data and parameter set (such as the H.264 PPS/SPS) buffered in the encoder are cleared.

    To continue encoding, you must call **OH_VideoEncoder_Start** again.

    ```c++
    // Refresh the encoder.
    int32_t ret = OH_VideoEncoder_Flush(videoEnc);
    if (ret != AV_ERR_OK) {
        // Exception handling.
    }
    // Start encoding again.
    ret = OH_VideoEncoder_Start(videoEnc);
    if (ret != AV_ERR_OK) {
        // Exception handling.
    }
    ```

16. (Optional) Call **OH_VideoEncoder_Reset()** to reset the encoder.

    After **OH_VideoEncoder_Reset** is called, the encoder returns to the Initialized state. To continue, you must call **OH_VideoEncoder_Configure** and **OH_VideoEncoder_Prepare** again.

    ```c++
    // Reset the encoder.
    int32_t ret = OH_VideoEncoder_Reset(videoEnc);
    if (ret != AV_ERR_OK) {
        // Exception handling.
    }
    // Reconfigure the encoder.
    ret = OH_VideoEncoder_Configure(videoEnc, format);
    if (ret != AV_ERR_OK) {
        // Exception handling.
    }
    // The encoder is ready again.
    ret = OH_VideoEncoder_Prepare(videoEnc);
    if (ret != AV_ERR_OK) {
        // Exception handling.
    }
    ```

17. (Optional) Call **OH_VideoEncoder_Stop()** to stop the encoder.

    After **OH_VideoEncoder_Stop** is called, the encoder retains the encoding instance and releases the input and output buffers. You can directly call **OH_VideoEncoder_Start** to continue encoding. After encoding is resumed, the first output **buffer** carries the parameter set and starts from an IDR frame.

    ```c++
    // Stop the encoder.
    int32_t ret = OH_VideoEncoder_Stop(videoEnc);
    if (ret != AV_ERR_OK) {
        // Exception handling.
    }
    ```

18. Call **OH_VideoEncoder_Destroy()** to destroy the encoder instance and release resources.

    > **NOTE**
    >
    > This API cannot be called in the callback functions.
    > After the call, you must set the encoder pointer to NULL to prevent program errors caused by dangling pointers.

    ```c++
    // Release the nativeWindow instance.
    if (nativeWindow != NULL) {
        OH_NativeWindow_DestroyNativeWindow(nativeWindow);
        nativeWindow = NULL;
    }
    // Call OH_VideoEncoder_Destroy to destroy the encoder.
    int32_t ret = AV_ERR_OK;
    if (videoEnc != NULL) {
        ret = OH_VideoEncoder_Destroy(videoEnc);
        videoEnc = NULL;
    }
    if (ret != AV_ERR_OK) {
        // Exception handling.
    }
    ```

### Buffer Input

The following walks you through how to implement the entire video encoding process in buffer mode. It uses the YUV file input and H.264 encoding format as an example.

Currently, the VideoEncoder module supports only data rotation in asynchronous mode.

1. Add the header files.

    ```cpp
    #include <multimedia/player_framework/native_avcodec_videoencoder.h>
    #include <multimedia/player_framework/native_avcapability.h>
    #include <multimedia/player_framework/native_avcodec_base.h>
    #include <multimedia/player_framework/native_avformat.h>
    #include <multimedia/player_framework/native_avbuffer.h>
    #include <fstream>
    ```

2. Create an encoder instance.

    The procedure is the same as that in surface mode and is not described here.

    ```c++
    // Create an encoder by name. If your application has special requirements, for example, expecting an encoder that supports a certain resolution, you can call OH_AVCodec_GetCapability to query the capability first.
    OH_AVCapability *capability = OH_AVCodec_GetCapability(OH_AVCODEC_MIMETYPE_VIDEO_AVC, true);
    const char *codecName = OH_AVCapability_GetName(capability);
    OH_AVCodec *videoEnc = OH_VideoEncoder_CreateByName(codecName);
    ```

    ```c++
    // Create an encoder by MIME type. Only specific codecs recommended by the system can be created in this way.
    // Only hardware encoders can be created.
    OH_AVCodec *videoEnc = OH_VideoEncoder_CreateByMime(OH_AVCODEC_MIMETYPE_VIDEO_AVC);
    ```

3. Call **OH_VideoEncoder_RegisterCallback()** to register the callback functions.

    Register the **OH_AVCodecCallback** struct that defines the following callback function pointers:

    - **OH_AVCodecOnError**, a callback used to report a codec operation error. For details about the error codes, see [OH_AVCodecOnError](../../reference/apis-avcodec-kit/_codec_base.md#oh_avcodeconerror).
    - **OH_AVCodecOnStreamChanged**, a callback used to report a codec stream change, for example, a format change.
    - **OH_AVCodecOnNeedInputBuffer**, a callback used to report input data required, which means that the encoder is ready for receiving YUV/RGB data.
    - **OH_AVCodecOnNewOutputBuffer**, a callback used to report output data generated, which means that encoding is complete.

    You need to process the callback functions to ensure that the encoder runs properly.

    <!--RP2--><!--RP2End-->

    <!--RP9-->
    ```c++
    bool isFirstFrame = true;
    ```
    <!--RP9End-->

    ```c++
    // Implement the OH_AVCodecOnError callback function.
    static void OnError(OH_AVCodec *codec, int32_t errorCode, void *userData)
    {
        // Process the error code in the callback.
        (void)codec;
        (void)errorCode;
        (void)userData;
    }
    ```

    ```c++
    // Implement the OH_AVCodecOnStreamChanged callback function.
    static void OnStreamChanged(OH_AVCodec *codec, OH_AVFormat *format, void *userData)
    {
        // This callback is useless in encoding scenarios.
        (void)codec;
        (void)format;
        (void)userData;
    }
    ```

    ```c++
    // Implement the OH_AVCodecOnNeedInputBuffer callback function.
    static void OnNeedInputBuffer(OH_AVCodec *codec, uint32_t index, OH_AVBuffer *buffer, void *userData)
    {
        // The index of the input frame buffer is sent to InIndexQueue.
        // The input frame data (specified by buffer) is sent to InBufferQueue.
        // Obtain the video width stride and height stride.
        if (isFirstFrame) {
            OH_AVFormat *format = OH_VideoEncoder_GetInputDescription(codec);
            OH_AVFormat_GetIntValue(format, OH_MD_KEY_VIDEO_STRIDE, &widthStride);
            OH_AVFormat_GetIntValue(format, OH_MD_KEY_VIDEO_SLICE_HEIGHT, &heightStride);
            OH_AVFormat_Destroy(format);
            isFirstFrame = false;
        }
        // Process the data.
        // Write the image to encode.
        // Notify the encoder of EOS.
    }
    ```

    <!--RP10-->
    ```c++
    // Implement the OH_AVCodecOnNewOutputBuffer callback function.
    static void OnNewOutputBuffer(OH_AVCodec *codec, uint32_t index, OH_AVBuffer *buffer, void *userData)
    {
        // The index of the output frame buffer is sent to outIndexQueue.
        // The encoded frame data (specified by buffer) is sent to outBufferQueue.
        // Process the data.
        // Release the encoded frame.
    }
    ```
    <!--RP10End-->

    ```c++
    // Call OH_VideoEncoder_RegisterCallback() to register the callback functions.
    OH_AVCodecCallback cb = {&OnError, &OnStreamChanged, &OnNeedInputBuffer, &OnNewOutputBuffer};
    int32_t ret = OH_VideoEncoder_RegisterCallback(videoEnc, cb, NULL);
    if (ret != AV_ERR_OK) {
        // Exception handling.
    }
    ```

    > **NOTE**
    >
    > In the callback functions, pay attention to multi-thread synchronization for operations on the data queue.
    >

4. Call **OH_VideoEncoder_Configure()** to configure the encoder.

    The procedure is the same as that in surface mode and is not described here.

    ```c++
    OH_AVFormat *format = OH_AVFormat_Create();
    // Set the format.
    OH_AVFormat_SetIntValue(format, OH_MD_KEY_WIDTH, width);
    OH_AVFormat_SetIntValue(format, OH_MD_KEY_HEIGHT, height);
    OH_AVFormat_SetIntValue(format, OH_MD_KEY_PIXEL_FORMAT, DEFAULT_PIXELFORMAT);
    // Configure the encoder.
    int32_t ret = OH_VideoEncoder_Configure(videoEnc, format);
    if (ret != AV_ERR_OK) {
        // Exception handling.
    }
    OH_AVFormat_Destroy(format);
    ```

5. Call **OH_VideoEncoder_Prepare()** to prepare internal resources for the encoder.

    ```c++
    int32_t ret = OH_VideoEncoder_Prepare(videoEnc);
    if (ret != AV_ERR_OK) {
        // Exception handling.
    }
    ```

6. Call **OH_VideoEncoder_Start()** to start the encoder.

    As soon as the encoder starts, the callback functions will be triggered to respond to events. Therefore, you must configure the input file and output file first.

    ```c++
    // Configure the paths of the input and output files.
    std::string_view inputFilePath = "/*yourpath*.yuv";
    std::string_view outputFilePath = "/*yourpath*.h264";
    std::unique_ptr<std::ifstream> inputFile = std::make_unique<std::ifstream>();
    std::unique_ptr<std::ofstream> outputFile = std::make_unique<std::ofstream>();
    inputFile->open(inputFilePath.data(), std::ios::in | std::ios::binary);
    outputFile->open(outputFilePath.data(), std::ios::out | std::ios::binary | std::ios::ate);
    // Start the encoder.
    int32_t ret = OH_VideoEncoder_Start(videoEnc);
    if (ret != AV_ERR_OK) {
        // Exception handling.
    }
    ```

7. (Optional) Dynamically configure encoder parameters during running.

    <!--RP11-->
    ```c++
    OH_AVFormat *format = OH_AVFormat_Create();
    // Dynamically request IDR frames.
    OH_AVFormat_SetIntValue(format, OH_MD_KEY_REQUEST_I_FRAME, true);
    int32_t ret = OH_VideoEncoder_SetParameter(videoEnc, format);
    if (ret != AV_ERR_OK) {
        // Exception handling.
    }
    OH_AVFormat_Destroy(format);
    ```
    <!--RP11End-->

8. Call **OH_VideoEncoder_PushInputBuffer()** to push the image to the input queue for encoding.

    In the code snippet below, the following variables are used:

    - **buffer**: parameter passed by the callback function **OnNeedInputBuffer**. You can call **OH_AVBuffer_GetAddr()** to obtain the pointer to the shared memory address.
    - **index**: parameter passed by the callback function **OnNeedInputBuffer**, which uniquely corresponds to the buffer.
    - **flags**: type of the buffer flag. For details, see [OH_AVCodecBufferFlags](../../reference/apis-avcodec-kit/_core.md#oh_avcodecbufferflags).
    - **stride**: stride of the obtained buffer data.

    ```c++
    int32_t frameSize = 0;
    if (stride == width) {
        // Process the file stream and obtain the frame length, and then write the data to encode to the buffer of the specified index.
        frameSize = width * height * 3 / 2; // Formula for calculating the data size of each frame in NV12 pixel format.
        inputFile->read(reinterpret_cast<char *>(OH_AVBuffer_GetAddr(buffer)), frameSize);
    } else {
        // If the stride is not equal to the width, perform offset based on the stride. For details, see the following example.
    }
    // Configure the buffer information.
    OH_AVCodecBufferAttr info;
    info.size = frameSize;
    info.offset = 0;
    info.pts = 0;
    info.flags = flags;
    int32_t ret = OH_AVBuffer_SetBufferAttr(buffer, &info);
    if (ret != AV_ERR_OK) {
        // Exception handling.
    }
    // Send the data to the input buffer for encoding. index is the index of the buffer.
    ret = OH_VideoEncoder_PushInputBuffer(videoEnc, index);
    if (ret != AV_ERR_OK) {
        // Exception handling.
    }
    ```

    Offset based on the stride. The following uses an NV12 image as an example, presenting the image layout of **width**, **height**, **wStride**, and **hStride**.

    - **OH_MD_KEY_VIDEO_PIC_WIDTH** corresponds to **width**.
    - **OH_MD_KEY_VIDEO_PIC_HEIGHT** corresponds to **height**.
    - **OH_MD_KEY_VIDEO_STRIDE** corresponds to **wStride**.
    - **OH_MD_KEY_VIDEO_SLICE_HEIGHT** corresponds to **hStride**.

    Add the header files.

    ```c++
    #include <string.h>
    ```

    The following is the sample code:

    ```c++
    struct Rect // Width and height of the source buffer. They are set by the caller.
    {
        int32_t width;
        int32_t height;
    };

    struct DstRect // Width stride and height stride of the destination buffer. They are obtained by calling OnNeedInputBuffer.
    {
        int32_t wStride;
        int32_t hStride;
    };

    struct SrcRect // Width stride and height stride of the source buffer. They are set by the caller.
    {
        int32_t wStride;
        int32_t hStride;
    };

    Rect rect = {320, 240};
    DstRect dstRect = {320, 250};
    SrcRect srcRect = {320, 250};
    uint8_t* dst = new uint8_t[dstRect.hStride * dstRect.wStride * 3 / 2]; // Pointer to the target memory area.
    uint8_t* src = new uint8_t[srcRect.hStride * srcRect.wStride * 3 / 2]; // Pointer to the source memory area.
    uint8_t* dstTemp = dst; // Cursor used for the copy, so that dst can still be released later.
    uint8_t* srcTemp = src; // Cursor used for the copy, so that src can still be released later.

    // Y: Copy the source data in the Y region to the target data in another region.
    for (int32_t i = 0; i < rect.height; ++i) {
        // Copy a row of data from the source to a row of the target.
        memcpy_s(dstTemp, dstRect.wStride, srcTemp, rect.width);
        // Update the pointers to the source data and target data to copy the next row. The pointers to the source data and target data are moved downwards by one wStride each time the source data and target data are updated.
        dstTemp += dstRect.wStride;
        srcTemp += srcRect.wStride;
    }
    // Padding.
    // Update the pointers to the source data and target data. The pointers move downwards by one padding.
    dstTemp += (dstRect.hStride - rect.height) * dstRect.wStride;
    srcTemp += (srcRect.hStride - rect.height) * srcRect.wStride;
    rect.height >>= 1;
    // UV: Copy the source data in the UV region to the target data in another region.
    for (int32_t i = 0; i < rect.height; ++i) {
        memcpy_s(dstTemp, dstRect.wStride, srcTemp, rect.width);
        dstTemp += dstRect.wStride;
        srcTemp += srcRect.wStride;
    }

    delete[] dst;
    dst = nullptr;
    delete[] src;
    src = nullptr;
    ```

    When processing buffer data (before pushing the data) during hardware encoding, you must copy the image data, aligned based on the width stride and height stride, to the AVBuffer provided by the input callback. Generally, obtain the image width, height, stride, and pixel format to ensure that the data to encode is correctly processed. For details about how to obtain the stride, see step 3 in [Buffer Input](#buffer-input).

9. Notify the encoder of EOS.

    In the code snippet below, the following variables are used:

    - **index**: parameter passed by the callback function **OnNeedInputBuffer**, which uniquely corresponds to the buffer.
    - **buffer**: parameter passed by the callback function **OnNeedInputBuffer**. You can call **OH_AVBuffer_GetAddr()** to obtain the pointer to the shared memory address.

    The API **OH_VideoEncoder_PushInputBuffer** is used to notify the encoder of EOS. This API is also used in step 8 to push the stream to the input queue for encoding. Therefore, in the current step, you must pass in the **AVCODEC_BUFFER_FLAGS_EOS** flag.

    ```c++
    OH_AVCodecBufferAttr info;
    info.size = 0;
    info.offset = 0;
    info.pts = 0;
    info.flags = AVCODEC_BUFFER_FLAGS_EOS;
    int32_t ret = OH_AVBuffer_SetBufferAttr(buffer, &info);
    if (ret != AV_ERR_OK) {
        // Exception handling.
    }
    ret = OH_VideoEncoder_PushInputBuffer(videoEnc, index);
    if (ret != AV_ERR_OK) {
        // Exception handling.
    }
    ```

10. Call **OH_VideoEncoder_FreeOutputBuffer()** to release encoded frames.

    The procedure is the same as that in surface mode and is not described here.

    ```c++
    // Obtain the encoded information.
    OH_AVCodecBufferAttr info;
    int32_t ret = OH_AVBuffer_GetBufferAttr(buffer, &info);
    if (ret != AV_ERR_OK) {
        // Exception handling.
    }
    // Write the encoded frame data (specified by buffer) to the output file.
    outputFile->write(reinterpret_cast<char *>(OH_AVBuffer_GetAddr(buffer)), info.size);
    // Free the output buffer. index is the index of the buffer.
    ret = OH_VideoEncoder_FreeOutputBuffer(videoEnc, index);
    if (ret != AV_ERR_OK) {
        // Exception handling.
    }
    ```

The subsequent processes (including refreshing, resetting, stopping, and destroying the encoder) are the same as those in surface mode. For details, see steps 15–18 in [Surface Input](#surface-input).
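
As a summary of steps 8 and 9 in buffer mode, the input side can be sketched as a single routine that fills each buffer delivered by **OnNeedInputBuffer** and signals EOS once the input file is exhausted. This is an illustrative sketch only: the helper name **FillInputBuffer** is an assumption, the globals **width**, **height**, and **inputFile** come from the earlier steps, and the row-by-row stride handling shown in step 8 still applies when the stride differs from the width.

```c++
// Hypothetical helper called with the index and buffer delivered by OnNeedInputBuffer.
// It assumes stride == width; otherwise copy row by row as shown in step 8.
static void FillInputBuffer(OH_AVCodec *codec, uint32_t index, OH_AVBuffer *buffer)
{
    int32_t frameSize = width * height * 3 / 2; // NV12 frame size.
    OH_AVCodecBufferAttr info;
    info.pts = 0;
    info.size = 0;
    info.offset = 0;
    info.flags = AVCODEC_BUFFER_FLAGS_NONE;
    inputFile->read(reinterpret_cast<char *>(OH_AVBuffer_GetAddr(buffer)), frameSize);
    if (inputFile->gcount() == frameSize) {
        info.size = frameSize;                 // A full frame was read: push it for encoding.
    } else {
        info.flags = AVCODEC_BUFFER_FLAGS_EOS; // Input exhausted: push an empty EOS buffer.
    }
    if (OH_AVBuffer_SetBufferAttr(buffer, &info) != AV_ERR_OK) {
        // Exception handling.
    }
    if (OH_VideoEncoder_PushInputBuffer(codec, index) != AV_ERR_OK) {
        // Exception handling.
    }
}
```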