# Video Encoding

You can call the native APIs provided by the VideoEncoder module to encode a video, that is, to compress video data into a video stream.

<!--RP3--><!--RP3End-->

For details about the supported encoding capabilities, see [AVCodec Supported Formats](avcodec-support-formats.md#video-encoding).

<!--RP1--><!--RP1End-->

The following table lists the supported video encoding capabilities:

<!--RP4-->
|          Capability                      |                              How to Use                                           |
| --------------------------------------- | ---------------------------------------------------------------------------------- |
| Layered encoding<br> Setting the LTR frame and reference frame                     | For details, see [Temporally Scalable Video Coding](video-encoding-temporal-scalability.md).       |
<!--RP4End-->

## Constraints

- The buffer mode does not support 10-bit image data.
- Because hardware encoder resources are limited, you must call **OH_VideoEncoder_Destroy** to destroy every encoder instance when it is no longer needed.
- If **flush()**, **reset()**, **stop()**, or **destroy()** is executed in a non-callback thread, the execution result is returned after all callbacks are executed.
- Once **Flush**, **Reset**, or **Stop** is called, the system reclaims the OH_AVBuffer. Therefore, do not continue to operate the OH_AVBuffer obtained through the previous callback function.
- The buffer mode and surface mode use the same APIs. Therefore, the surface mode is described as an example.
- In buffer mode, after obtaining the pointer to an OH_AVBuffer instance through the callback function **OH_AVCodecOnNeedInputBuffer**, call **OH_VideoEncoder_PushInputBuffer** to notify the encoder that writing to the buffer is complete. The encoder then proceeds to encode the data contained in the buffer. If the OH_NativeBuffer instance is obtained through **OH_AVBuffer_GetNativeBuffer** and its lifecycle extends beyond that of the OH_AVBuffer pointer instance, you must duplicate the data. In this case, you must manage the lifecycle of the newly generated OH_NativeBuffer object to ensure that the object can be correctly used and released.
<!--RP14--><!--RP14End-->

## Surface Input and Buffer Input

- Surface input and buffer input differ in data sources.

- They are applicable to different scenarios.
  - In surface input, data is passed in through an OHNativeWindow. This mode can be connected directly with other modules, such as the camera module.
  - In buffer input, the caller copies the original data into a pre-allocated memory area. This mode is better suited to scenarios such as reading video data from files.

- The two also differ slightly in the API calling modes:
  - In buffer mode, the caller calls **OH_VideoEncoder_PushInputBuffer** to input data. In surface mode, the caller calls **OH_VideoEncoder_GetSurface**, before the encoder is ready, to obtain the OHNativeWindow for video data transmission.
  - In buffer mode, you can use **attr** in **OH_AVBuffer** to pass in the End of Stream (EOS) flag, and the encoder stops when it reads the last frame. In surface mode, the caller calls **OH_VideoEncoder_NotifyEndOfStream** to notify the encoder of EOS.

For details about the development procedure, see [Surface Input](#surface-input) and [Buffer Input](#buffer-input).

## State Machine Interaction

The following figure shows the interaction between states.

![Invoking relationship of state](figures/state-invocation.png)

1. An encoder enters the Initialized state in either of the following ways:
   - When an encoder instance is initially created, the encoder enters the Initialized state.
   - When **OH_VideoEncoder_Reset** is called in any state, the encoder returns to the Initialized state.

2. When the encoder is in the Initialized state, you can call **OH_VideoEncoder_Configure** to configure the encoder. After the configuration, the encoder enters the Configured state.
3. When the encoder is in the Configured state, you can call **OH_VideoEncoder_Prepare()** to switch it to the Prepared state.
4. When the encoder is in the Prepared state, you can call **OH_VideoEncoder_Start** to switch it to the Executing state.
   - When the encoder is in the Executing state, you can call **OH_VideoEncoder_Stop** to switch it back to the Prepared state.

5. In rare cases, the encoder may encounter an error and enter the Error state. If this happens, queue operations return an invalid value or throw an exception.
   - When the encoder is in the Error state, you can either call **OH_VideoEncoder_Reset** to switch it to the Initialized state or call **OH_VideoEncoder_Destroy** to switch it to the Released state.

6. The Executing state has three substates: Flushed, Running, and End-of-Stream.
   - After **OH_VideoEncoder_Start** is called, the encoder enters the Running substate immediately.
   - When the encoder is in the Executing state, you can call **OH_VideoEncoder_Flush** to switch it to the Flushed substate.
   - After all data to be processed is transferred to the encoder, the [AVCODEC_BUFFER_FLAGS_EOS](../../reference/apis-avcodec-kit/_core.md#oh_avcodecbufferflags-1) flag is added to the last input buffer in the input buffer queue. Once this flag is detected, the encoder transitions to the End-of-Stream substate. In this state, the encoder does not accept new input, but continues to generate output until it reaches the tail frame.

7. When the encoder is no longer needed, you must call **OH_VideoEncoder_Destroy** to destroy the encoder instance, which then transitions to the Released state.

## How to Develop

Read [VideoEncoder](../../reference/apis-avcodec-kit/_video_encoder.md) for the API reference.

The figure below shows the call relationship of video encoding.

- The dotted line indicates an optional operation.

- The solid line indicates a mandatory operation.

![Call relationship of video encoding](figures/video-encode.png)

### Linking the Dynamic Libraries in the CMake Script

```cmake
target_link_libraries(sample PUBLIC libnative_media_codecbase.so)
target_link_libraries(sample PUBLIC libnative_media_core.so)
target_link_libraries(sample PUBLIC libnative_media_venc.so)
```

> **NOTE**
>
> The word **sample** in the preceding code snippet is only an example. Use the actual project directory name.
>

### Defining the Basic Structure

The sample code provided in this section adheres to the C++17 standard and is for reference only. You can define your own buffer objects by referring to it.

1. Add the header files.

    ```c++
    #include <chrono>
    #include <condition_variable>
    #include <memory>
    #include <mutex>
    #include <queue>
    #include <shared_mutex>
    ```

2. Define the information about the encoder callback buffer.

    ```c++
    struct CodecBufferInfo {
        CodecBufferInfo(uint32_t index, OH_AVBuffer *buffer): buffer(buffer), index(index), isValid(true) {}
        CodecBufferInfo(uint32_t index, OH_AVFormat *parameter): parameter(parameter), index(index), isValid(true) {}
        // Callback buffer.
        OH_AVBuffer *buffer = nullptr;
        // In surface mode, the frame-specific parameter passed by the callback, which can be used only after the frame-specific parameter callback function is registered.
        OH_AVFormat *parameter = nullptr;
        // Index of the callback buffer.
        uint32_t index = 0;
        // Whether the current buffer information is still valid.
        bool isValid = true;
    };
    ```

3. Define the input and output queues for encoding.

    ```c++
    class CodecBufferQueue {
    public:
        // Pass the callback buffer information to the queue.
        void Enqueue(const std::shared_ptr<CodecBufferInfo> bufferInfo)
        {
            std::unique_lock<std::mutex> lock(mutex_);
            bufferQueue_.push(bufferInfo);
            cond_.notify_all();
        }

        // Obtain the information about the callback buffer.
        std::shared_ptr<CodecBufferInfo> Dequeue(int32_t timeoutMs = 1000)
        {
            std::unique_lock<std::mutex> lock(mutex_);
            (void)cond_.wait_for(lock, std::chrono::milliseconds(timeoutMs), [this]() { return !bufferQueue_.empty(); });
            if (bufferQueue_.empty()) {
                return nullptr;
            }
            std::shared_ptr<CodecBufferInfo> bufferInfo = bufferQueue_.front();
            bufferQueue_.pop();
            return bufferInfo;
        }

        // Clear the queue. The previous callback buffers become unavailable.
        void Flush()
        {
            std::unique_lock<std::mutex> lock(mutex_);
            while (!bufferQueue_.empty()) {
                std::shared_ptr<CodecBufferInfo> bufferInfo = bufferQueue_.front();
                // After the flush, stop, reset, or destroy operation is performed, the previous callback buffer information becomes invalid.
                bufferInfo->isValid = false;
                bufferQueue_.pop();
            }
        }

    private:
        std::mutex mutex_;
        std::condition_variable cond_;
        std::queue<std::shared_ptr<CodecBufferInfo>> bufferQueue_;
    };
    ```

4. Define global variables.

    These global variables are for reference only. They can be encapsulated into an object based on service requirements.

    ```c++
    // Video frame width.
    int32_t width = 320;
    // Video frame height.
    int32_t height = 240;
    // Video pixel format.
    OH_AVPixelFormat pixelFormat = AV_PIXEL_FORMAT_NV12;
    // Video width stride.
    int32_t widthStride = 0;
    // Video height stride.
    int32_t heightStride = 0;
    // Pointer to the encoder instance.
    OH_AVCodec *videoEnc = nullptr;
    // Encoder synchronization lock.
    std::shared_mutex codecMutex;
    // Encoder input queue.
    CodecBufferQueue inQueue;
    // Encoder output queue.
    CodecBufferQueue outQueue;
    ```

### Surface Input

The following walks you through how to implement the entire video encoding process in surface mode. In this example, surface data is input and encoded into an H.264 stream.

Currently, the VideoEncoder module supports only data rotation in asynchronous mode.

1. Add the header files.

    ```c++
    #include <multimedia/player_framework/native_avcodec_videoencoder.h>
    #include <multimedia/player_framework/native_avcapability.h>
    #include <multimedia/player_framework/native_avcodec_base.h>
    #include <multimedia/player_framework/native_avformat.h>
    #include <multimedia/player_framework/native_avbuffer.h>
    #include <fstream>
    #include <string_view>
    ```

2. Create an encoder instance.

    You can create an encoder by name or MIME type. In the code snippet below, the following variables are used:

    - **videoEnc**: pointer to the video encoder instance.
    - **capability**: pointer to the encoder's capability.
    - **OH_AVCODEC_MIMETYPE_VIDEO_AVC**: AVC video codec.

    The following is an example:

    ```c++
    // Create an encoder by name. If your application has special requirements, for example, expecting an encoder that supports a certain resolution, you can call OH_AVCodec_GetCapability to query the capability first.
    OH_AVCapability *capability = OH_AVCodec_GetCapability(OH_AVCODEC_MIMETYPE_VIDEO_AVC, true);
    // Alternatively, query by category to obtain a hardware encoder instance explicitly:
    // OH_AVCapability *capability = OH_AVCodec_GetCapabilityByCategory(OH_AVCODEC_MIMETYPE_VIDEO_AVC, true, HARDWARE);
    const char *codecName = OH_AVCapability_GetName(capability);
    OH_AVCodec *videoEnc = OH_VideoEncoder_CreateByName(codecName);
    ```

    ```c++
    // Create an encoder by MIME type. Only specific codecs recommended by the system can be created in this way.
    // Only hardware encoders can be created.
    OH_AVCodec *videoEnc = OH_VideoEncoder_CreateByMime(OH_AVCODEC_MIMETYPE_VIDEO_AVC);
    ```

3. Call **OH_VideoEncoder_RegisterCallback()** to register the callback functions.

    Register the **OH_AVCodecCallback** struct that defines the following callback function pointers:

    - **OH_AVCodecOnError**, a callback used to report a codec operation error. For details about the error codes, see [OH_AVCodecOnError](../../reference/apis-avcodec-kit/_codec_base.md#oh_avcodeconerror).
    - **OH_AVCodecOnStreamChanged**, a callback used to report a codec stream change, for example, a format change.
    - **OH_AVCodecOnNeedInputBuffer**, a callback used to report input data required. This callback does not take effect in surface mode, because data is input through the obtained surface.
    - **OH_AVCodecOnNewOutputBuffer**, a callback used to report output data generated, which means that encoding is complete.

    <!--RP2--><!--RP2End-->

    The following is an example:

    <!--RP5-->
    ```c++
    // Set the OH_AVCodecOnError callback function, which is used to report a codec operation error.
    static void OnError(OH_AVCodec *codec, int32_t errorCode, void *userData)
    {
        // Process the error code in the callback.
        (void)codec;
        (void)errorCode;
        (void)userData;
    }
    ```
    <!--RP5End-->

    <!--RP12-->
    ```c++
    // Set the OH_AVCodecOnStreamChanged callback function, which is used to report an encoding stream change.
    static void OnStreamChanged(OH_AVCodec *codec, OH_AVFormat *format, void *userData)
    {
        // In surface mode, this callback function is triggered when the surface resolution changes.
        (void)codec;
        (void)userData;
        OH_AVFormat_GetIntValue(format, OH_MD_KEY_VIDEO_WIDTH, &width);
        OH_AVFormat_GetIntValue(format, OH_MD_KEY_VIDEO_HEIGHT, &height);
    }
    ```
    <!--RP12End-->

    ```c++
    // Set the OH_AVCodecOnNeedInputBuffer callback function, which is used to send an input frame to the data queue.
    static void OnNeedInputBuffer(OH_AVCodec *codec, uint32_t index, OH_AVBuffer *buffer, void *userData)
    {
        // In surface mode, this callback function does not take effect. Data is input through the obtained surface.
        (void)userData;
        (void)index;
        (void)buffer;
    }
    ```

    <!--RP6-->
    ```c++
    // Set the OH_AVCodecOnNewOutputBuffer callback function, which is used to send an encoded frame to the output queue.
    static void OnNewOutputBuffer(OH_AVCodec *codec, uint32_t index, OH_AVBuffer *buffer, void *userData)
    {
        // The data buffer of the encoded frame and its index are sent to outQueue.
        (void)codec;
        (void)userData;
        outQueue.Enqueue(std::make_shared<CodecBufferInfo>(index, buffer));
    }
    ```
    <!--RP6End-->

    ```c++
    // Call OH_VideoEncoder_RegisterCallback() to register the callback functions.
    OH_AVCodecCallback cb = {&OnError, &OnStreamChanged, &OnNeedInputBuffer, &OnNewOutputBuffer};
    int32_t ret = OH_VideoEncoder_RegisterCallback(videoEnc, cb, nullptr); // nullptr: userData is null.
    if (ret != AV_ERR_OK) {
        // Handle exceptions.
    }
    ```

    > **NOTE**
    >
    > In the callback functions, pay attention to multi-thread synchronization for operations on the data queue.

4. (Optional) Call **OH_VideoEncoder_RegisterParameterCallback()** to register the frame-specific parameter callback function.

    For details, see [Temporally Scalable Video Coding](video-encoding-temporal-scalability.md).

    <!--RP7-->
    ```c++
    // 4.1 Implement the OH_VideoEncoder_OnNeedInputParameter callback function.
    static void OnNeedInputParameter(OH_AVCodec *codec, uint32_t index, OH_AVFormat *parameter, void *userData)
    {
        // The frame-specific parameter of the input frame and its index are sent to inQueue.
        inQueue.Enqueue(std::make_shared<CodecBufferInfo>(index, parameter));
    }

    // 4.2 Register the frame-specific parameter callback function.
    OH_VideoEncoder_OnNeedInputParameter inParaCb = OnNeedInputParameter;
    OH_VideoEncoder_RegisterParameterCallback(videoEnc, inParaCb, nullptr); // nullptr: userData is null.
    ```
    <!--RP7End-->

5. Call **OH_VideoEncoder_Configure()** to configure the encoder.

    For details about the configurable options, see [Video Dedicated Key-Value Pairs](../../reference/apis-avcodec-kit/_codec_base.md#media-data-key-value-pairs).

    For details about the parameter verification rules, see [OH_VideoEncoder_Configure()](../../reference/apis-avcodec-kit/_video_encoder.md#oh_videoencoder_configure).

    The parameter value ranges can be obtained through the capability query interface. For details, see [Obtaining Supported Codecs](obtain-supported-codecs.md).

    Currently, the following options must be configured for all supported formats: video frame width, video frame height, and video pixel format. The code snippet below uses the global variables defined earlier:

    - **width**: 320 pixels
    - **height**: 240 pixels
    - **pixelFormat**: **AV_PIXEL_FORMAT_NV12** (the pixel format of the YUV file is NV12)

    ```c++
    // Configure the video frame rate.
    double frameRate = 30.0;
    // Configure the video YUV range flag.
    bool rangeFlag = false;
    // Configure the video primary color.
    int32_t primary = static_cast<int32_t>(OH_ColorPrimary::COLOR_PRIMARY_BT709);
    // Configure the transfer characteristics.
    int32_t transfer = static_cast<int32_t>(OH_TransferCharacteristic::TRANSFER_CHARACTERISTIC_BT709);
    // Configure the maximum matrix coefficient.
    int32_t matrix = static_cast<int32_t>(OH_MatrixCoefficient::MATRIX_COEFFICIENT_IDENTITY);
    // Configure the encoding profile.
    int32_t profile = static_cast<int32_t>(OH_AVCProfile::AVC_PROFILE_HIGH);
    // Configure the encoding bit rate mode.
    int32_t rateMode = static_cast<int32_t>(OH_BitrateMode::BITRATE_MODE_VBR);
    // Configure the key frame interval, in milliseconds.
    int32_t iFrameInterval = 1000;
    // Configure the bit rate, in bit/s.
    int64_t bitRate = 5000000;
    // Set the encoding quality.
    int32_t quality = 90;

    OH_AVFormat *format = OH_AVFormat_Create();
    OH_AVFormat_SetIntValue(format, OH_MD_KEY_WIDTH, width); // Mandatory.
    OH_AVFormat_SetIntValue(format, OH_MD_KEY_HEIGHT, height); // Mandatory.
    OH_AVFormat_SetIntValue(format, OH_MD_KEY_PIXEL_FORMAT, pixelFormat); // Mandatory.

    OH_AVFormat_SetDoubleValue(format, OH_MD_KEY_FRAME_RATE, frameRate);
    OH_AVFormat_SetIntValue(format, OH_MD_KEY_RANGE_FLAG, rangeFlag);
    OH_AVFormat_SetIntValue(format, OH_MD_KEY_COLOR_PRIMARIES, primary);
    OH_AVFormat_SetIntValue(format, OH_MD_KEY_TRANSFER_CHARACTERISTICS, transfer);
    OH_AVFormat_SetIntValue(format, OH_MD_KEY_MATRIX_COEFFICIENTS, matrix);
    OH_AVFormat_SetIntValue(format, OH_MD_KEY_I_FRAME_INTERVAL, iFrameInterval);
    OH_AVFormat_SetIntValue(format, OH_MD_KEY_PROFILE, profile);
    // Configure OH_MD_KEY_QUALITY only when OH_BitrateMode = BITRATE_MODE_CQ is used.
    if (rateMode == static_cast<int32_t>(OH_BitrateMode::BITRATE_MODE_CQ)) {
        OH_AVFormat_SetIntValue(format, OH_MD_KEY_QUALITY, quality);
    } else if (rateMode == static_cast<int32_t>(OH_BitrateMode::BITRATE_MODE_CBR) ||
               rateMode == static_cast<int32_t>(OH_BitrateMode::BITRATE_MODE_VBR)) {
        OH_AVFormat_SetLongValue(format, OH_MD_KEY_BITRATE, bitRate);
    }
    OH_AVFormat_SetIntValue(format, OH_MD_KEY_VIDEO_ENCODE_BITRATE_MODE, rateMode);
    int32_t ret = OH_VideoEncoder_Configure(videoEnc, format);
    if (ret != AV_ERR_OK) {
        // Handle exceptions.
    }
    OH_AVFormat_Destroy(format);
    ```

    > **NOTE**
    >
    > If an optional parameter is incorrectly configured, the error code **AV_ERR_INVALID_VAL** is returned. However, **OH_VideoEncoder_Configure()** does not fail. Instead, its execution continues with the default value.

6. Obtain a surface.

    Obtain the OHNativeWindow in surface mode. The surface must be obtained before **OH_VideoEncoder_Prepare** is called.

    ```c++
    // Obtain the surface used for data input.
    OHNativeWindow *nativeWindow;
    int32_t ret = OH_VideoEncoder_GetSurface(videoEnc, &nativeWindow);
    if (ret != AV_ERR_OK) {
        // Handle exceptions.
    }
    // Use the OHNativeWindow* variable to obtain the address of the data to be filled through the producer interface.
    ```

    For details about how to use an OHNativeWindow* variable, see [OHNativeWindow](../../reference/apis-arkgraphics2d/_native_window.md#ohnativewindow).

7. Call **OH_VideoEncoder_Prepare()** to prepare internal resources for the encoder.

    ```c++
    int32_t ret = OH_VideoEncoder_Prepare(videoEnc);
    if (ret != AV_ERR_OK) {
        // Handle exceptions.
    }
    ```

8. Call **OH_VideoEncoder_Start()** to start the encoder.

    ```c++
    // Configure the paths of the input and output files.
    std::string_view outputFilePath = "/*yourpath*.h264";
    std::unique_ptr<std::ofstream> outputFile = std::make_unique<std::ofstream>();
    outputFile->open(outputFilePath.data(), std::ios::out | std::ios::binary | std::ios::ate);
    // Start the encoder.
    int32_t ret = OH_VideoEncoder_Start(videoEnc);
    if (ret != AV_ERR_OK) {
        // Handle exceptions.
    }
    ```

9. (Optional) Call **OH_VideoEncoder_SetParameter()** to dynamically configure encoder parameters while the encoder is running.

    <!--RP8-->
    ```c++
    OH_AVFormat *format = OH_AVFormat_Create();
    // Dynamically request an IDR frame.
    OH_AVFormat_SetIntValue(format, OH_MD_KEY_REQUEST_I_FRAME, true);
    int32_t ret = OH_VideoEncoder_SetParameter(videoEnc, format);
    if (ret != AV_ERR_OK) {
        // Handle exceptions.
    }
    OH_AVFormat_Destroy(format);
    ```
    <!--RP8End-->

10. Write the image to encode.

    In step 6, you obtained the OHNativeWindow* returned by **OH_VideoEncoder_GetSurface**. The data required for encoding is continuously fed in through the surface. Therefore, you do not need to process the **OnNeedInputBuffer** callback function or use **OH_VideoEncoder_PushInputBuffer** to input data.
    <!--RP13--><!--RP13End-->
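
    Whatever produces the frames must respect the buffer's stride: each image row occupies **widthStride** bytes in the buffer even though only **width** bytes are payload. A codec-free sketch of the row-by-row copy (the sizes are hypothetical; in buffer mode the real stride values come from **OH_VideoEncoder_GetInputDescription**):

    ```c++
    #include <cassert>
    #include <cstdint>
    #include <cstring>
    #include <vector>

    // Copy a tightly packed plane (width bytes per row) into a strided
    // destination (widthStride bytes per row), as required when width != stride.
    void CopyPlaneWithStride(const uint8_t *src, uint8_t *dst,
                             int32_t width, int32_t height, int32_t widthStride)
    {
        for (int32_t row = 0; row < height; ++row) {
            std::memcpy(dst + row * widthStride, src + row * width, width);
        }
    }

    int main()
    {
        // Hypothetical sizes: a 4x2 luma plane in a buffer with an 8-byte stride.
        const int32_t width = 4, height = 2, widthStride = 8;
        std::vector<uint8_t> src = {1, 2, 3, 4, 5, 6, 7, 8}; // packed rows
        std::vector<uint8_t> dst(widthStride * height, 0);

        CopyPlaneWithStride(src.data(), dst.data(), width, height, widthStride);

        // Row 0 payload lands at offset 0, row 1 payload at offset widthStride.
        assert(dst[0] == 1 && dst[3] == 4);
        assert(dst[8] == 5 && dst[11] == 8);
        assert(dst[4] == 0); // padding bytes remain untouched
        return 0;
    }
    ```

    For NV12, apply the same copy to the luma plane (height rows) and to the interleaved UV plane (height/2 rows).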

11. (Optional) Call **OH_VideoEncoder_PushInputParameter()** to notify the encoder that the frame-specific parameter configuration is complete.

    In step 4, you have registered the frame-specific parameter callback function.

    In the code snippet below, the following variables are used:

    - **index**: parameter passed by the callback function **OnNeedInputParameter**, which uniquely corresponds to the buffer.

    ```c++
    std::shared_ptr<CodecBufferInfo> bufferInfo = inQueue.Dequeue();
    std::shared_lock<std::shared_mutex> lock(codecMutex);
    if (bufferInfo == nullptr || !bufferInfo->isValid) {
        // Handle exceptions.
    }
    // Determine the value as required. Set it to 1 to request an IDR frame for this input.
    int32_t isIFrame = 0;
    OH_AVFormat_SetIntValue(bufferInfo->parameter, OH_MD_KEY_REQUEST_I_FRAME, isIFrame);
    int32_t ret = OH_VideoEncoder_PushInputParameter(videoEnc, bufferInfo->index);
    if (ret != AV_ERR_OK) {
        // Handle exceptions.
    }
    ```

12. Call **OH_VideoEncoder_NotifyEndOfStream()** to notify the encoder of EOS.

    ```c++
    // In surface mode, you only need to call this API to notify the encoder of EOS.
    // In buffer mode, you need to set the AVCODEC_BUFFER_FLAGS_EOS flag and then call OH_VideoEncoder_PushInputBuffer to notify the encoder of EOS.
    int32_t ret = OH_VideoEncoder_NotifyEndOfStream(videoEnc);
    if (ret != AV_ERR_OK) {
        // Handle exceptions.
    }
    ```

13. Call **OH_VideoEncoder_FreeOutputBuffer()** to release encoded frames.

    In the code snippet below, the following variables are used:

    - **index**: parameter passed by the callback function **OnNewOutputBuffer**, which uniquely corresponds to the buffer.
    - **buffer**: parameter passed by the callback function **OnNewOutputBuffer**. You can obtain the pointer to the shared memory address by calling [OH_AVBuffer_GetAddr](../../reference/apis-avcodec-kit/_core.md#oh_avbuffer_getaddr).

    ```c++
    std::shared_ptr<CodecBufferInfo> bufferInfo = outQueue.Dequeue();
    std::shared_lock<std::shared_mutex> lock(codecMutex);
    if (bufferInfo == nullptr || !bufferInfo->isValid) {
        // Handle exceptions.
    }
    // Obtain the encoded information.
    OH_AVCodecBufferAttr info;
    int32_t ret = OH_AVBuffer_GetBufferAttr(bufferInfo->buffer, &info);
    if (ret != AV_ERR_OK) {
        // Handle exceptions.
    }
    // Write the encoded frame data (specified by buffer) to the output file.
    outputFile->write(reinterpret_cast<char *>(OH_AVBuffer_GetAddr(bufferInfo->buffer)), info.size);
    // Free the output buffer. index is the index of the buffer.
    ret = OH_VideoEncoder_FreeOutputBuffer(videoEnc, bufferInfo->index);
    if (ret != AV_ERR_OK) {
        // Handle exceptions.
    }
    ```

14. (Optional) Call **OH_VideoEncoder_Flush()** to refresh the encoder.

    After **OH_VideoEncoder_Flush** is called, the encoder remains in the Running state, but the input and output data and parameter set (such as the H.264 PPS/SPS) buffered in the encoder are cleared.

    To continue encoding, you must call **OH_VideoEncoder_Start** again.

    ```c++
    std::unique_lock<std::shared_mutex> lock(codecMutex);
    // Refresh the encoder.
    int32_t ret = OH_VideoEncoder_Flush(videoEnc);
    if (ret != AV_ERR_OK) {
        // Handle exceptions.
    }
    inQueue.Flush();
    outQueue.Flush();
    // Start encoding again.
    ret = OH_VideoEncoder_Start(videoEnc);
    if (ret != AV_ERR_OK) {
        // Handle exceptions.
    }
    ```

15. (Optional) Call **OH_VideoEncoder_Reset()** to reset the encoder.

    After **OH_VideoEncoder_Reset** is called, the encoder returns to the Initialized state. To continue, you must call **OH_VideoEncoder_Configure** and **OH_VideoEncoder_Prepare** again.

    ```c++
    std::unique_lock<std::shared_mutex> lock(codecMutex);
    // Reset the encoder.
    int32_t ret = OH_VideoEncoder_Reset(videoEnc);
    if (ret != AV_ERR_OK) {
        // Handle exceptions.
    }
    inQueue.Flush();
    outQueue.Flush();
    // Reconfigure the encoder.
    OH_AVFormat *format = OH_AVFormat_Create();
    ret = OH_VideoEncoder_Configure(videoEnc, format);
    if (ret != AV_ERR_OK) {
        // Handle exceptions.
    }
    OH_AVFormat_Destroy(format);
    // The encoder is ready again.
    ret = OH_VideoEncoder_Prepare(videoEnc);
    if (ret != AV_ERR_OK) {
        // Handle exceptions.
    }
    ```

16. (Optional) Call **OH_VideoEncoder_Stop()** to stop the encoder.

    After **OH_VideoEncoder_Stop** is called, the encoder retains the encoding instance and releases the input and output buffers. You can directly call **OH_VideoEncoder_Start** to continue encoding.

    After the restart, the first **buffer** passed must carry the parameter set (such as the H.264 PPS/SPS), starting from an IDR frame.

    ```c++
    std::unique_lock<std::shared_mutex> lock(codecMutex);
    // Stop the encoder.
    int32_t ret = OH_VideoEncoder_Stop(videoEnc);
    if (ret != AV_ERR_OK) {
        // Handle exceptions.
    }
    inQueue.Flush();
    outQueue.Flush();
    ```

17. Call **OH_VideoEncoder_Destroy()** to destroy the encoder instance and release resources.

    > **NOTE**
    >
    > This API cannot be called in the callback functions.
    >
    > After the call, set the encoder pointer to null to prevent errors caused by dangling pointers.

    ```c++
    std::unique_lock<std::shared_mutex> lock(codecMutex);
    // Release the nativeWindow instance.
    if (nativeWindow != nullptr) {
        OH_NativeWindow_DestroyNativeWindow(nativeWindow);
        nativeWindow = nullptr;
    }
    // Call OH_VideoEncoder_Destroy to destroy the encoder.
    int32_t ret = AV_ERR_OK;
    if (videoEnc != nullptr) {
        ret = OH_VideoEncoder_Destroy(videoEnc);
        videoEnc = nullptr;
    }
    if (ret != AV_ERR_OK) {
        // Handle exceptions.
    }
    inQueue.Flush();
    outQueue.Flush();
    ```
613
614### Buffer Input
615
616The following walks you through how to implement the entire video encoding process in buffer mode. It uses the YUV file input and H.264 encoding format as an example.
617Currently, the VideoEncoder module supports only data rotation in asynchronous mode.
618
6191. Add the header files.
620
621    ```c++
622    #include <multimedia/player_framework/native_avcodec_videoencoder.h>
623    #include <multimedia/player_framework/native_avcapability.h>
624    #include <multimedia/player_framework/native_avcodec_base.h>
625    #include <multimedia/player_framework/native_avformat.h>
626    #include <multimedia/player_framework/native_avbuffer.h>
627    #include <fstream>
628    ```
629
6302. Create an encoder instance.
631
632    The procedure is the same as that in surface mode and is not described here.
633
634    ```c++
635    // Create an encoder by name. If your application has special requirements, for example, expecting an encoder that supports a certain resolution, you can call OH_AVCodec_GetCapability to query the capability first.
636    OH_AVCapability *capability = OH_AVCodec_GetCapability(OH_AVCODEC_MIMETYPE_VIDEO_AVC, true);
637    const char *codecName = OH_AVCapability_GetName(capability);
638    OH_AVCodec *videoEnc = OH_VideoEncoder_CreateByName(codecName);
639    ```
640
641    ```c++
642    // Create an encoder by MIME type. Only specific codecs recommended by the system can be created in this way.
643    // If multiple codecs need to be created, create hardware encoder instances first. If the hardware resources are insufficient, create software encoder instances.
644    OH_AVCodec *videoEnc = OH_VideoEncoder_CreateByMime(OH_AVCODEC_MIMETYPE_VIDEO_AVC);
645    ```
646
3. Call **OH_VideoEncoder_RegisterCallback()** to register the callback functions.

    Register the **OH_AVCodecCallback** struct that defines the following callback function pointers:
    - **OH_AVCodecOnError**, a callback used to report a codec operation error. For details about the error codes, see [OH_AVCodecOnError](../../reference/apis-avcodec-kit/_codec_base.md#oh_avcodeconerror).
    - **OH_AVCodecOnStreamChanged**, a callback used to report a codec stream change, for example, a format change.
    - **OH_AVCodecOnNeedInputBuffer**, a callback used to report input data required, which means that the encoder is ready for receiving YUV/RGB data.
    - **OH_AVCodecOnNewOutputBuffer**, a callback used to report output data generated, which means that encoding is complete.

    You need to process the callback functions to ensure that the encoder runs properly.

    <!--RP2--><!--RP2End-->

    <!--RP9-->
    ```c++
    int32_t widthStride = 0;  // Width stride of the input buffer, obtained in OnNeedInputBuffer.
    int32_t heightStride = 0; // Height stride of the input buffer, obtained in OnNeedInputBuffer.
    bool isFirstFrame = true;
    ```
    <!--RP9End-->

    ```c++
    // Implement the OH_AVCodecOnError callback function.
    static void OnError(OH_AVCodec *codec, int32_t errorCode, void *userData)
    {
        // Process the error code in the callback.
        (void)codec;
        (void)errorCode;
        (void)userData;
    }
    ```

    ```c++
    // Implement the OH_AVCodecOnStreamChanged callback function.
    static void OnStreamChanged(OH_AVCodec *codec, OH_AVFormat *format, void *userData)
    {
        // In buffer mode, this callback function does not take effect.
        (void)codec;
        (void)format;
        (void)userData;
    }
    ```

    ```c++
    // Implement the OH_AVCodecOnNeedInputBuffer callback function.
    static void OnNeedInputBuffer(OH_AVCodec *codec, uint32_t index, OH_AVBuffer *buffer, void *userData)
    {
        // Obtain the video width stride and height stride.
        if (isFirstFrame) {
            OH_AVFormat *format = OH_VideoEncoder_GetInputDescription(codec);
            OH_AVFormat_GetIntValue(format, OH_MD_KEY_VIDEO_STRIDE, &widthStride);
            OH_AVFormat_GetIntValue(format, OH_MD_KEY_VIDEO_SLICE_HEIGHT, &heightStride);
            OH_AVFormat_Destroy(format);
            isFirstFrame = false;
        }
        // The data buffer of the input frame and its index are sent to inQueue.
        (void)codec;
        (void)userData;
        inQueue.Enqueue(std::make_shared<CodecBufferInfo>(index, buffer));
    }
    ```

    <!--RP10-->
    ```c++
    // Implement the OH_AVCodecOnNewOutputBuffer callback function.
    static void OnNewOutputBuffer(OH_AVCodec *codec, uint32_t index, OH_AVBuffer *buffer, void *userData)
    {
        // The data buffer of the finished frame and its index are sent to outQueue.
        (void)userData;
        outQueue.Enqueue(std::make_shared<CodecBufferInfo>(index, buffer));
    }
    ```
    <!--RP10End-->

    ```c++
    // Call OH_VideoEncoder_RegisterCallback() to register the callback functions.
    OH_AVCodecCallback cb = {&OnError, &OnStreamChanged, &OnNeedInputBuffer, &OnNewOutputBuffer};
    int32_t ret = OH_VideoEncoder_RegisterCallback(videoEnc, cb, nullptr);
    if (ret != AV_ERR_OK) {
        // Handle exceptions.
    }
    ```

    > **NOTE**
    >
    > In the callback functions, pay attention to multi-thread synchronization for operations on the data queue.

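    The samples in this topic enqueue and dequeue **CodecBufferInfo** objects from the callback threads through **inQueue** and **outQueue**. These helpers are not part of the AVCodec API; the sketch below shows one possible implementation (the names match the samples, but the exact design is up to you):

    ```c++
    #include <condition_variable>
    #include <cstdint>
    #include <memory>
    #include <mutex>
    #include <queue>

    // In the real samples, this type comes from the AVCodec headers.
    struct OH_AVBuffer;

    struct CodecBufferInfo {
        CodecBufferInfo(uint32_t argIndex, OH_AVBuffer *argBuffer) : index(argIndex), buffer(argBuffer) {}
        uint32_t index = 0;            // Buffer index passed by the callback.
        OH_AVBuffer *buffer = nullptr; // Buffer pointer passed by the callback.
        bool isValid = true;           // Set to false when the codec is flushed, stopped, or reset.
    };

    // Minimal thread-safe FIFO queue guarded by a mutex and a condition variable.
    template <typename T>
    class BlockingQueue {
    public:
        void Enqueue(const T &value)
        {
            std::unique_lock<std::mutex> lock(mutex_);
            queue_.push(value);
            cond_.notify_all();
        }

        T Dequeue()
        {
            std::unique_lock<std::mutex> lock(mutex_);
            cond_.wait(lock, [this] { return !queue_.empty(); });
            T value = queue_.front();
            queue_.pop();
            return value;
        }

    private:
        std::mutex mutex_;
        std::condition_variable cond_;
        std::queue<T> queue_;
    };

    // Queues shared between the main thread and the callback threads.
    BlockingQueue<std::shared_ptr<CodecBufferInfo>> inQueue;
    BlockingQueue<std::shared_ptr<CodecBufferInfo>> outQueue;
    ```

    A production implementation would also need a way to unblock **Dequeue()** and invalidate pending buffers when **Flush**, **Stop**, or **Reset** is called, which is omitted here for brevity.
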
4. Call **OH_VideoEncoder_Configure()** to configure the encoder.

    The procedure is the same as that in surface mode and is not described here.

    ```c++
    OH_AVFormat *format = OH_AVFormat_Create();
    // Set the format.
    OH_AVFormat_SetIntValue(format, OH_MD_KEY_WIDTH, width); // Mandatory.
    OH_AVFormat_SetIntValue(format, OH_MD_KEY_HEIGHT, height); // Mandatory.
    OH_AVFormat_SetIntValue(format, OH_MD_KEY_PIXEL_FORMAT, pixelFormat); // Mandatory.
    // Configure the encoder.
    int32_t ret = OH_VideoEncoder_Configure(videoEnc, format);
    if (ret != AV_ERR_OK) {
        // Handle exceptions.
    }
    OH_AVFormat_Destroy(format);
    ```

5. Call **OH_VideoEncoder_Prepare()** to prepare internal resources for the encoder.

    ```c++
    ret = OH_VideoEncoder_Prepare(videoEnc);
    if (ret != AV_ERR_OK) {
        // Handle exceptions.
    }
    ```

6. Call **OH_VideoEncoder_Start()** to start the encoder.

    As soon as the encoder starts, the callback functions will be triggered to respond to events. Therefore, you must configure the input file and output file first.

    ```c++
    // Configure the paths of the input and output files.
    std::string_view inputFilePath = "/*yourpath*.yuv";
    std::string_view outputFilePath = "/*yourpath*.h264";
    std::unique_ptr<std::ifstream> inputFile = std::make_unique<std::ifstream>();
    std::unique_ptr<std::ofstream> outputFile = std::make_unique<std::ofstream>();
    inputFile->open(inputFilePath.data(), std::ios::in | std::ios::binary);
    outputFile->open(outputFilePath.data(), std::ios::out | std::ios::binary | std::ios::ate);
    if (!inputFile->is_open() || !outputFile->is_open()) {
        // Handle exceptions.
    }
    // Start the encoder.
    int32_t ret = OH_VideoEncoder_Start(videoEnc);
    if (ret != AV_ERR_OK) {
        // Handle exceptions.
    }
    ```

7. (Optional) Dynamically configure encoder parameters during running.

    <!--RP11-->
    ```c++
    OH_AVFormat *format = OH_AVFormat_Create();
    // Dynamically request IDR frames.
    OH_AVFormat_SetIntValue(format, OH_MD_KEY_REQUEST_I_FRAME, true);
    int32_t ret = OH_VideoEncoder_SetParameter(videoEnc, format);
    if (ret != AV_ERR_OK) {
        // Handle exceptions.
    }
    OH_AVFormat_Destroy(format);
    ```
    <!--RP11End-->

8. Call **OH_VideoEncoder_PushInputBuffer()** to push the image to the input queue for encoding.

    In the code snippet below, the following variables are used:

    - **buffer**: parameter passed by the callback function **OnNeedInputBuffer**. You can obtain the pointer to the shared memory address by calling [OH_AVBuffer_GetAddr](../../reference/apis-avcodec-kit/_core.md#oh_avbuffer_getaddr).
    - **index**: parameter passed by the callback function **OnNeedInputBuffer**, which uniquely corresponds to the buffer.
    - **flags**: type of the buffer flag. For details, see [OH_AVCodecBufferFlags](../../reference/apis-avcodec-kit/_core.md#oh_avcodecbufferflags).
    - **widthStride**: stride of the obtained buffer data.

    ```c++
    std::shared_ptr<CodecBufferInfo> bufferInfo = inQueue.Dequeue();
    std::shared_lock<std::shared_mutex> lock(codecMutex);
    if (bufferInfo == nullptr || !bufferInfo->isValid) {
        // Handle exceptions.
    }
    // Write image data.
    // Formula for calculating the data size of each frame in NV12 pixel format.
    // Declared outside the branch because it is also used when configuring the buffer information below.
    int32_t frameSize = width * height * 3 / 2;
    if (widthStride == width) {
        // Process the file stream, obtain the frame length, and write the data to encode to the buffer of the specified index.
        inputFile->read(reinterpret_cast<char *>(OH_AVBuffer_GetAddr(bufferInfo->buffer)), frameSize);
    } else {
        // If the stride is not equal to the width, perform offsets based on the stride. For details, see the following example.
    }
    // Configure the buffer information.
    OH_AVCodecBufferAttr info;
    info.size = frameSize;
    info.offset = 0;
    info.pts = 0; // Set the actual presentation timestamp of the frame.
    info.flags = flags;
    int32_t ret = OH_AVBuffer_SetBufferAttr(bufferInfo->buffer, &info);
    if (ret != AV_ERR_OK) {
        // Handle exceptions.
    }
    // Configure the buffer frame-specific information.
    int32_t isIFrame = 0; // You can determine the value. Set it to 1 to request an IDR frame.
    OH_AVFormat *parameter = OH_AVBuffer_GetParameter(bufferInfo->buffer);
    OH_AVFormat_SetIntValue(parameter, OH_MD_KEY_REQUEST_I_FRAME, isIFrame);
    ret = OH_AVBuffer_SetParameter(bufferInfo->buffer, parameter);
    if (ret != AV_ERR_OK) {
        // Handle exceptions.
    }
    OH_AVFormat_Destroy(parameter);
    // Send the data to the input buffer for encoding. index is the index of the buffer.
    ret = OH_VideoEncoder_PushInputBuffer(videoEnc, bufferInfo->index);
    if (ret != AV_ERR_OK) {
        // Handle exceptions.
    }
    ```
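    The sample sets **pts** to 0 for brevity. In practice, each frame should carry an increasing presentation timestamp. A sketch of the arithmetic for a constant frame rate is shown below; the helper name is illustrative, and a microsecond time base is assumed — verify the unit expected by your pipeline:

    ```c++
    #include <cstdint>

    // Presentation timestamp of a frame at a constant frame rate, in microseconds
    // (assuming a microsecond time base; adjust if your pipeline uses another unit).
    int64_t PtsForFrame(int64_t frameIndex, int64_t frameRate)
    {
        return frameIndex * 1000000 / frameRate;
    }
    ```

    For example, `info.pts = PtsForFrame(frameCount++, 30);` for a 30 fps stream.
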
    To offset by the stride, the following uses an NV12 image as an example to illustrate the layout of **width**, **height**, **wStride**, and **hStride**.

    - **OH_MD_KEY_VIDEO_PIC_WIDTH** corresponds to **width**.
    - **OH_MD_KEY_VIDEO_PIC_HEIGHT** corresponds to **height**.
    - **OH_MD_KEY_VIDEO_STRIDE** corresponds to **wStride**.
    - **OH_MD_KEY_VIDEO_SLICE_HEIGHT** corresponds to **hStride**.

    ![copy by line](figures/copy-by-line.png)

    Add the header file.

    ```c++
    #include <string.h>
    ```

    The following is the sample code:

    ```c++
    struct Rect // Width and height of the source buffer. You can set them as required.
    {
        int32_t width;
        int32_t height;
    };

    struct DstRect // Width stride and height stride of the destination buffer. They are obtained by calling OH_VideoEncoder_GetInputDescription.
    {
        int32_t wStride;
        int32_t hStride;
    };

    struct SrcRect // Width stride and height stride of the source buffer. You can set them as required.
    {
        int32_t wStride;
        int32_t hStride;
    };

    Rect rect = {320, 240};
    DstRect dstRect = {320, 256};
    SrcRect srcRect = {320, 250};
    uint8_t* dst = new uint8_t[dstRect.hStride * dstRect.wStride * 3 / 2]; // Pointer to the destination memory area.
    uint8_t* src = new uint8_t[srcRect.hStride * srcRect.wStride * 3 / 2]; // Pointer to the source memory area.
    uint8_t* dstTemp = dst;
    uint8_t* srcTemp = src;

    // Y: Copy the source data in the Y region to the destination region.
    for (int32_t i = 0; i < rect.height; ++i) {
        // Copy a row of data from the source to a row of the destination.
        memcpy(dstTemp, srcTemp, rect.width);
        // Update the pointers to copy the next row. Each update moves the source and destination pointers down by one wStride.
        dstTemp += dstRect.wStride;
        srcTemp += srcRect.wStride;
    }
    // Padding.
    // Update the pointers to skip the padding rows between the Y plane and the UV plane.
    dstTemp += (dstRect.hStride - rect.height) * dstRect.wStride;
    srcTemp += (srcRect.hStride - rect.height) * srcRect.wStride;
    rect.height >>= 1; // The interleaved UV plane is half the height of the Y plane.
    // UV: Copy the source data in the UV region to the destination region.
    for (int32_t i = 0; i < rect.height; ++i) {
        memcpy(dstTemp, srcTemp, rect.width);
        dstTemp += dstRect.wStride;
        srcTemp += srcRect.wStride;
    }

    delete[] dst;
    dst = nullptr;
    delete[] src;
    src = nullptr;
    ```

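    The same copy logic can be wrapped into a reusable helper. The sketch below is plain C++ with no AVCodec dependency (the function name is illustrative); it performs the same per-row Y and UV copies:

    ```c++
    #include <cstdint>
    #include <cstring>

    // Copy an NV12 image row by row from a source buffer with strides
    // (srcWStride, srcHStride) into a destination buffer with strides
    // (dstWStride, dstHStride). width and height describe the visible image.
    void CopyNv12WithStride(uint8_t *dst, int32_t dstWStride, int32_t dstHStride,
                            const uint8_t *src, int32_t srcWStride, int32_t srcHStride,
                            int32_t width, int32_t height)
    {
        // Y plane: one row at a time.
        for (int32_t i = 0; i < height; ++i) {
            memcpy(dst + i * dstWStride, src + i * srcWStride, width);
        }
        // Skip the padding rows between the Y plane and the UV plane.
        const uint8_t *srcUv = src + srcHStride * srcWStride;
        uint8_t *dstUv = dst + dstHStride * dstWStride;
        // UV plane: interleaved, half the height of the Y plane.
        for (int32_t i = 0; i < height / 2; ++i) {
            memcpy(dstUv + i * dstWStride, srcUv + i * srcWStride, width);
        }
    }
    ```
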
    When processing buffer data in hardware encoding (before pushing the data), you must copy the image data, aligned based on the width and height strides, to the AVBuffer provided by the input callback. Generally, pay attention to the width, height, stride, and pixel format of the image to ensure that the data to encode is processed correctly. For details, see step 3 in [Buffer Input](#buffer-input).

9. Notify the encoder of EOS.

    In the code snippet below, the following variables are used:

    - **index**: parameter passed by the callback function **OnNeedInputBuffer**, which uniquely corresponds to the buffer.
    - **buffer**: parameter passed by the callback function **OnNeedInputBuffer**. You can obtain the pointer to the shared memory address by calling [OH_AVBuffer_GetAddr](../../reference/apis-avcodec-kit/_core.md#oh_avbuffer_getaddr).

    The API **OH_VideoEncoder_PushInputBuffer** is used to notify the encoder of EOS. It is also used in step 8 to push the stream to the input queue for encoding. Therefore, in the current step, you must pass in the **AVCODEC_BUFFER_FLAGS_EOS** flag.

    ```c++
    std::shared_ptr<CodecBufferInfo> bufferInfo = inQueue.Dequeue();
    std::shared_lock<std::shared_mutex> lock(codecMutex);
    if (bufferInfo == nullptr || !bufferInfo->isValid) {
        // Handle exceptions.
    }
    OH_AVCodecBufferAttr info;
    info.size = 0;
    info.offset = 0;
    info.pts = 0;
    info.flags = AVCODEC_BUFFER_FLAGS_EOS;
    int32_t ret = OH_AVBuffer_SetBufferAttr(bufferInfo->buffer, &info);
    if (ret != AV_ERR_OK) {
        // Handle exceptions.
    }
    ret = OH_VideoEncoder_PushInputBuffer(videoEnc, bufferInfo->index);
    if (ret != AV_ERR_OK) {
        // Handle exceptions.
    }
    ```

10. Call **OH_VideoEncoder_FreeOutputBuffer()** to release encoded frames.

    The procedure is the same as that in surface mode and is not described here.

    ```c++
    std::shared_ptr<CodecBufferInfo> bufferInfo = outQueue.Dequeue();
    std::shared_lock<std::shared_mutex> lock(codecMutex);
    if (bufferInfo == nullptr || !bufferInfo->isValid) {
        // Handle exceptions.
    }
    // Obtain the encoded information.
    OH_AVCodecBufferAttr info;
    int32_t ret = OH_AVBuffer_GetBufferAttr(bufferInfo->buffer, &info);
    if (ret != AV_ERR_OK) {
        // Handle exceptions.
    }
    // Write the encoded frame data (specified by buffer) to the output file.
    outputFile->write(reinterpret_cast<char *>(OH_AVBuffer_GetAddr(bufferInfo->buffer)), info.size);
    // Free the output buffer. index is the index of the buffer.
    ret = OH_VideoEncoder_FreeOutputBuffer(videoEnc, bufferInfo->index);
    if (ret != AV_ERR_OK) {
        // Handle exceptions.
    }
    ```

The subsequent processes (including refreshing, resetting, stopping, and destroying the encoder) are the same as those in surface mode. For details, see steps 14–17 in [Surface Input](#surface-input).