# MindSpore


## Overview

Provides APIs related to MindSpore Lite model inference.

**Since**: 9


## Summary


### Files

| Name| Description|
| -------- | -------- |
| [context.h](context_8h.md) | Provides **Context** APIs for configuring runtime information.<br>File to include: &lt;mindspore/context.h&gt;<br>Library: libmindspore_lite_ndk.so|
| [data_type.h](data__type_8h.md) | Declares tensor data types.<br>File to include: &lt;mindspore/data_type.h&gt;<br>Library: libmindspore_lite_ndk.so|
| [format.h](format_8h.md) | Declares tensor data formats.<br>File to include: &lt;mindspore/format.h&gt;<br>Library: libmindspore_lite_ndk.so|
| [model.h](model_8h.md) | Provides model-related APIs for model creation and inference.<br>File to include: &lt;mindspore/model.h&gt;<br>Library: libmindspore_lite_ndk.so|
| [status.h](status_8h.md) | Provides the status codes of MindSpore Lite.<br>File to include: &lt;mindspore/status.h&gt;<br>Library: libmindspore_lite_ndk.so|
| [tensor.h](tensor_8h.md) | Provides APIs for creating and modifying tensor information.<br>File to include: &lt;mindspore/tensor.h&gt;<br>Library: libmindspore_lite_ndk.so|
| [types.h](types_8h.md) | Provides the model file types and device types supported by MindSpore Lite.<br>File to include: &lt;mindspore/types.h&gt;<br>Library: libmindspore_lite_ndk.so|


### Structs

| Name| Description|
| -------- | -------- |
| [OH_AI_TensorHandleArray](_o_h___a_i___tensor_handle_array.md) | Defines the tensor array structure, which stores the tensor array pointer and tensor array length.|
| [OH_AI_ShapeInfo](_o_h___a_i___shape_info.md) | Defines dimension information. The maximum number of dimensions is set by **OH_AI_MAX_SHAPE_NUM**.|
| [OH_AI_CallBackParam](_o_h___a_i___call_back_param.md) | Defines the operator information passed in a callback.|

### Macros

| Name| Description|
| -------- | -------- |
| [OH_AI_MAX_SHAPE_NUM](#oh_ai_max_shape_num) 32 | Defines the maximum number of dimensions in a tensor shape.|

### Types

| Name| Description|
| -------- | -------- |
| [OH_AI_ContextHandle](#oh_ai_contexthandle) | Defines the pointer to the MindSpore context.|
| [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) | Defines the pointer to the MindSpore device information.|
| [OH_AI_DataType](#oh_ai_datatype) | Declares data types supported by MSTensor.|
| [OH_AI_Format](#oh_ai_format) | Declares data formats supported by MSTensor.|
| [OH_AI_ModelHandle](#oh_ai_modelhandle) | Defines the pointer to a model object.|
| [OH_AI_TrainCfgHandle](#oh_ai_traincfghandle) | Defines the pointer to a training configuration object.|
| [OH_AI_TensorHandleArray](#oh_ai_tensorhandlearray) | Defines the tensor array structure, which stores the tensor array pointer and tensor array length.|
| [OH_AI_ShapeInfo](_o_h___a_i___shape_info.md) | Defines dimension information. The maximum number of dimensions is set by **OH_AI_MAX_SHAPE_NUM**.|
| [OH_AI_CallBackParam](#oh_ai_callbackparam) | Defines the operator information passed in a callback.|
| (\*[OH_AI_KernelCallBack](#oh_ai_kernelcallback)) (const [OH_AI_TensorHandleArray](_o_h___a_i___tensor_handle_array.md) inputs, const [OH_AI_TensorHandleArray](_o_h___a_i___tensor_handle_array.md) outputs, const [OH_AI_CallBackParam](_o_h___a_i___call_back_param.md) kernel_Info) | Defines the pointer to a callback.|
| [OH_AI_Status](#oh_ai_status) | Defines MindSpore status codes.|
| [OH_AI_TensorHandle](#oh_ai_tensorhandle) | Defines the handle of a tensor object.|
| [OH_AI_ModelType](#oh_ai_modeltype) | Defines model file types.|
| [OH_AI_DeviceType](#oh_ai_devicetype) | Defines the supported device types.|
| [OH_AI_NNRTDeviceType](#oh_ai_nnrtdevicetype) | Defines NNRt device types.|
| [OH_AI_PerformanceMode](#oh_ai_performancemode) | Defines performance modes of the NNRt device.|
| [OH_AI_Priority](#oh_ai_priority) | Defines NNRt inference task priorities.|
| [OH_AI_OptimizationLevel](#oh_ai_optimizationlevel) | Defines training optimization levels.|
| [OH_AI_QuantizationType](#oh_ai_quantizationtype) | Defines quantization types.|
| [NNRTDeviceDesc](#nnrtdevicedesc) | Defines NNRt device information, including the device ID and device name.|

### Enums

| Name| Description|
| -------- | -------- |
| [OH_AI_DataType](#oh_ai_datatype) {<br>OH_AI_DATATYPE_UNKNOWN = 0, OH_AI_DATATYPE_OBJECTTYPE_STRING = 12, OH_AI_DATATYPE_OBJECTTYPE_LIST = 13, OH_AI_DATATYPE_OBJECTTYPE_TUPLE = 14,<br>OH_AI_DATATYPE_OBJECTTYPE_TENSOR = 17, OH_AI_DATATYPE_NUMBERTYPE_BEGIN = 29, OH_AI_DATATYPE_NUMBERTYPE_BOOL = 30, OH_AI_DATATYPE_NUMBERTYPE_INT8 = 32,<br>OH_AI_DATATYPE_NUMBERTYPE_INT16 = 33, OH_AI_DATATYPE_NUMBERTYPE_INT32 = 34, OH_AI_DATATYPE_NUMBERTYPE_INT64 = 35, OH_AI_DATATYPE_NUMBERTYPE_UINT8 = 37,<br>OH_AI_DATATYPE_NUMBERTYPE_UINT16 = 38, OH_AI_DATATYPE_NUMBERTYPE_UINT32 = 39, OH_AI_DATATYPE_NUMBERTYPE_UINT64 = 40, OH_AI_DATATYPE_NUMBERTYPE_FLOAT16 = 42,<br>OH_AI_DATATYPE_NUMBERTYPE_FLOAT32 = 43, OH_AI_DATATYPE_NUMBERTYPE_FLOAT64 = 44, OH_AI_DATATYPE_NUMBERTYPE_END = 46, OH_AI_DataTypeInvalid = INT32_MAX<br>} | Declares data types supported by MSTensor.|
| [OH_AI_Format](#oh_ai_format) {<br>OH_AI_FORMAT_NCHW = 0, OH_AI_FORMAT_NHWC = 1, OH_AI_FORMAT_NHWC4 = 2, OH_AI_FORMAT_HWKC = 3,<br>OH_AI_FORMAT_HWCK = 4, OH_AI_FORMAT_KCHW = 5, OH_AI_FORMAT_CKHW = 6, OH_AI_FORMAT_KHWC = 7,<br>OH_AI_FORMAT_CHWK = 8, OH_AI_FORMAT_HW = 9, OH_AI_FORMAT_HW4 = 10, OH_AI_FORMAT_NC = 11,<br>OH_AI_FORMAT_NC4 = 12, OH_AI_FORMAT_NC4HW4 = 13, OH_AI_FORMAT_NCDHW = 15, OH_AI_FORMAT_NWC = 16,<br>OH_AI_FORMAT_NCW = 17<br>} | Declares data formats supported by MSTensor.|
| [OH_AI_CompCode](#oh_ai_compcode) { <br>OH_AI_COMPCODE_CORE = 0x00000000u, <br>OH_AI_COMPCODE_MD = 0x10000000u, <br>OH_AI_COMPCODE_ME = 0x20000000u, <br>OH_AI_COMPCODE_MC = 0x30000000u, <br>OH_AI_COMPCODE_LITE = 0xF0000000u<br> } | Defines MindSpore component codes.|
| [OH_AI_Status](#oh_ai_status) {<br>OH_AI_STATUS_SUCCESS = 0, OH_AI_STATUS_CORE_FAILED = OH_AI_COMPCODE_CORE \| 0x1, OH_AI_STATUS_LITE_ERROR = OH_AI_COMPCODE_LITE \| (0x0FFFFFFF &amp; -1), OH_AI_STATUS_LITE_NULLPTR = OH_AI_COMPCODE_LITE \| (0x0FFFFFFF &amp; -2),<br>OH_AI_STATUS_LITE_PARAM_INVALID = OH_AI_COMPCODE_LITE \| (0x0FFFFFFF &amp; -3), OH_AI_STATUS_LITE_NO_CHANGE = OH_AI_COMPCODE_LITE \| (0x0FFFFFFF &amp; -4), OH_AI_STATUS_LITE_SUCCESS_EXIT = OH_AI_COMPCODE_LITE \| (0x0FFFFFFF &amp; -5), OH_AI_STATUS_LITE_MEMORY_FAILED = OH_AI_COMPCODE_LITE \| (0x0FFFFFFF &amp; -6),<br>OH_AI_STATUS_LITE_NOT_SUPPORT = OH_AI_COMPCODE_LITE \| (0x0FFFFFFF &amp; -7), OH_AI_STATUS_LITE_THREADPOOL_ERROR = OH_AI_COMPCODE_LITE \| (0x0FFFFFFF &amp; -8), OH_AI_STATUS_LITE_UNINITIALIZED_OBJ = OH_AI_COMPCODE_LITE \| (0x0FFFFFFF &amp; -9), OH_AI_STATUS_LITE_OUT_OF_TENSOR_RANGE = OH_AI_COMPCODE_LITE \| (0x0FFFFFFF &amp; -100),<br>OH_AI_STATUS_LITE_INPUT_TENSOR_ERROR, OH_AI_STATUS_LITE_REENTRANT_ERROR = OH_AI_COMPCODE_LITE \| (0x0FFFFFFF &amp; -102), OH_AI_STATUS_LITE_GRAPH_FILE_ERROR = OH_AI_COMPCODE_LITE \| (0x0FFFFFFF &amp; -200), OH_AI_STATUS_LITE_NOT_FIND_OP = OH_AI_COMPCODE_LITE \| (0x0FFFFFFF &amp; -300),<br>OH_AI_STATUS_LITE_INVALID_OP_NAME = OH_AI_COMPCODE_LITE \| (0x0FFFFFFF &amp; -301), OH_AI_STATUS_LITE_INVALID_OP_ATTR = OH_AI_COMPCODE_LITE \| (0x0FFFFFFF &amp; -302), OH_AI_STATUS_LITE_OP_EXECUTE_FAILURE, OH_AI_STATUS_LITE_FORMAT_ERROR = OH_AI_COMPCODE_LITE \| (0x0FFFFFFF &amp; -400),<br>OH_AI_STATUS_LITE_INFER_ERROR = OH_AI_COMPCODE_LITE \| (0x0FFFFFFF &amp; -500), OH_AI_STATUS_LITE_INFER_INVALID, OH_AI_STATUS_LITE_INPUT_PARAM_INVALID<br>} | Defines MindSpore status codes.|
| [OH_AI_ModelType](#oh_ai_modeltype) { OH_AI_MODELTYPE_MINDIR = 0, OH_AI_MODELTYPE_INVALID = 0xFFFFFFFF } | Defines model file types.|
| [OH_AI_DeviceType](#oh_ai_devicetype) {<br>OH_AI_DEVICETYPE_CPU = 0, OH_AI_DEVICETYPE_GPU, OH_AI_DEVICETYPE_KIRIN_NPU, OH_AI_DEVICETYPE_NNRT = 60,<br>OH_AI_DEVICETYPE_INVALID = 100<br>} | Defines the supported device types.|
| [OH_AI_NNRTDeviceType](#oh_ai_nnrtdevicetype) { OH_AI_NNRTDEVICE_OTHERS = 0, OH_AI_NNRTDEVICE_CPU = 1, OH_AI_NNRTDEVICE_GPU = 2, OH_AI_NNRTDEVICE_ACCELERATOR = 3 } | Defines NNRt device types.|
| [OH_AI_PerformanceMode](#oh_ai_performancemode) {<br>OH_AI_PERFORMANCE_NONE = 0, OH_AI_PERFORMANCE_LOW = 1, OH_AI_PERFORMANCE_MEDIUM = 2, OH_AI_PERFORMANCE_HIGH = 3,<br>OH_AI_PERFORMANCE_EXTREME = 4<br>} | Defines performance modes of the NNRt device.|
| [OH_AI_Priority](#oh_ai_priority) { OH_AI_PRIORITY_NONE = 0, OH_AI_PRIORITY_LOW = 1, OH_AI_PRIORITY_MEDIUM = 2, OH_AI_PRIORITY_HIGH = 3 } | Defines NNRt inference task priorities.|
| [OH_AI_OptimizationLevel](#oh_ai_optimizationlevel) {<br>OH_AI_KO0 = 0, OH_AI_KO2 = 2, OH_AI_KO3 = 3, OH_AI_KAUTO = 4,<br>OH_AI_KOPTIMIZATIONTYPE = 0xFFFFFFFF<br>} | Defines training optimization levels.|
| [OH_AI_QuantizationType](#oh_ai_quantizationtype) { OH_AI_NO_QUANT = 0, OH_AI_WEIGHT_QUANT = 1, OH_AI_FULL_QUANT = 2, OH_AI_UNKNOWN_QUANT_TYPE = 0xFFFFFFFF } | Defines quantization types.|

### Functions

| Name| Description|
| -------- | -------- |
| [OH_AI_ContextCreate](#oh_ai_contextcreate) () | Creates a context object.|
| [OH_AI_ContextDestroy](#oh_ai_contextdestroy) ([OH_AI_ContextHandle](#oh_ai_contexthandle) \*context) | Destroys a context object.|
| [OH_AI_ContextSetThreadNum](#oh_ai_contextsetthreadnum) ([OH_AI_ContextHandle](#oh_ai_contexthandle) context, int32_t thread_num) | Sets the number of runtime threads.|
| [OH_AI_ContextGetThreadNum](#oh_ai_contextgetthreadnum) (const [OH_AI_ContextHandle](#oh_ai_contexthandle) context) | Obtains the number of threads.|
| [OH_AI_ContextSetThreadAffinityMode](#oh_ai_contextsetthreadaffinitymode) ([OH_AI_ContextHandle](#oh_ai_contexthandle) context, int mode) | Sets the affinity mode for binding runtime threads to CPU cores, which are classified into large, medium, and small cores based on CPU frequency. Only the large or medium cores can be bound, not the small cores.|
| [OH_AI_ContextGetThreadAffinityMode](#oh_ai_contextgetthreadaffinitymode) (const [OH_AI_ContextHandle](#oh_ai_contexthandle) context) | Obtains the affinity mode for binding runtime threads to CPU cores.|
| [OH_AI_ContextSetThreadAffinityCoreList](#oh_ai_contextsetthreadaffinitycorelist) ([OH_AI_ContextHandle](#oh_ai_contexthandle) context, const int32_t \*core_list, size_t core_num) | Sets the list of CPU cores bound to a runtime thread.|
| [OH_AI_ContextGetThreadAffinityCoreList](#oh_ai_contextgetthreadaffinitycorelist) (const [OH_AI_ContextHandle](#oh_ai_contexthandle) context, size_t \*core_num) | Obtains the list of bound CPU cores.|
| [OH_AI_ContextSetEnableParallel](#oh_ai_contextsetenableparallel) ([OH_AI_ContextHandle](#oh_ai_contexthandle) context, bool is_parallel) | Sets whether to enable parallelism between operators. The setting is ineffective because the feature of this API is not yet available.|
| [OH_AI_ContextGetEnableParallel](#oh_ai_contextgetenableparallel) (const [OH_AI_ContextHandle](#oh_ai_contexthandle) context) | Checks whether parallelism between operators is supported.|
| [OH_AI_ContextAddDeviceInfo](#oh_ai_contextadddeviceinfo) ([OH_AI_ContextHandle](#oh_ai_contexthandle) context, [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) device_info) | Attaches the custom device information to the inference context.|
| [OH_AI_DeviceInfoCreate](#oh_ai_deviceinfocreate) ([OH_AI_DeviceType](#oh_ai_devicetype) device_type) | Creates a device information object.|
| [OH_AI_DeviceInfoDestroy](#oh_ai_deviceinfodestroy) ([OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) \*device_info) | Destroys a device information object. Note: After the device information instance is added to the context, the caller does not need to destroy it manually.|
| [OH_AI_DeviceInfoSetProvider](#oh_ai_deviceinfosetprovider) ([OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) device_info, const char \*provider) | Sets the name of the provider.|
| [OH_AI_DeviceInfoGetProvider](#oh_ai_deviceinfogetprovider) (const [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) device_info) | Obtains the provider name.|
| [OH_AI_DeviceInfoSetProviderDevice](#oh_ai_deviceinfosetproviderdevice) ([OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) device_info, const char \*device) | Sets the name of a provider device.|
| [OH_AI_DeviceInfoGetProviderDevice](#oh_ai_deviceinfogetproviderdevice) (const [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) device_info) | Obtains the name of a provider device.|
| [OH_AI_DeviceInfoGetDeviceType](#oh_ai_deviceinfogetdevicetype) (const [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) device_info) | Obtains the device type.|
| [OH_AI_DeviceInfoSetEnableFP16](#oh_ai_deviceinfosetenablefp16) ([OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) device_info, bool is_fp16) | Sets whether to enable float16 inference. This function is available only for CPU and GPU devices.|
| [OH_AI_DeviceInfoGetEnableFP16](#oh_ai_deviceinfogetenablefp16) (const [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) device_info) | Checks whether float16 inference is enabled. This function is available only for CPU and GPU devices.|
| [OH_AI_DeviceInfoSetFrequency](#oh_ai_deviceinfosetfrequency) ([OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) device_info, int frequency) | Sets the NPU frequency type. This function is available only for NPU devices.|
| [OH_AI_DeviceInfoGetFrequency](#oh_ai_deviceinfogetfrequency) (const [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) device_info) | Obtains the NPU frequency type. This function is available only for NPU devices.|
| [OH_AI_GetAllNNRTDeviceDescs](#oh_ai_getallnnrtdevicedescs) (size_t \*num) | Obtains the descriptions of all NNRt devices in the system.|
| [OH_AI_GetElementOfNNRTDeviceDescs](#oh_ai_getelementofnnrtdevicedescs) ([NNRTDeviceDesc](#nnrtdevicedesc) \*descs, size_t index) | Obtains the pointer to an element in the NNRt device description array.|
| [OH_AI_DestroyAllNNRTDeviceDescs](#oh_ai_destroyallnnrtdevicedescs) ([NNRTDeviceDesc](#nnrtdevicedesc) \*\*desc) | Destroys the NNRt device description array obtained by [OH_AI_GetAllNNRTDeviceDescs](#oh_ai_getallnnrtdevicedescs).|
| [OH_AI_GetDeviceIdFromNNRTDeviceDesc](#oh_ai_getdeviceidfromnnrtdevicedesc) (const [NNRTDeviceDesc](#nnrtdevicedesc) \*desc) | Obtains the NNRt device ID from the specified NNRt device description. Note that this ID is valid only for NNRt devices.|
| [OH_AI_GetNameFromNNRTDeviceDesc](#oh_ai_getnamefromnnrtdevicedesc) (const [NNRTDeviceDesc](#nnrtdevicedesc) \*desc) | Obtains the NNRt device name from the specified NNRt device description.|
| [OH_AI_GetTypeFromNNRTDeviceDesc](#oh_ai_gettypefromnnrtdevicedesc) (const [NNRTDeviceDesc](#nnrtdevicedesc) \*desc) | Obtains the NNRt device type from the specified NNRt device description.|
| [OH_AI_CreateNNRTDeviceInfoByName](#oh_ai_creatennrtdeviceinfobyname) (const char \*name) | Searches for the NNRt device with the specified name and creates the NNRt device information based on the information about the first found NNRt device.|
| [OH_AI_CreateNNRTDeviceInfoByType](#oh_ai_creatennrtdeviceinfobytype) ([OH_AI_NNRTDeviceType](#oh_ai_nnrtdevicetype) type) | Searches for the NNRt device with the specified type and creates the NNRt device information based on the information about the first found NNRt device.|
| [OH_AI_DeviceInfoSetDeviceId](#oh_ai_deviceinfosetdeviceid) ([OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) device_info, size_t device_id) | Sets the ID of an NNRt device. This function is available only for NNRt devices.|
| [OH_AI_DeviceInfoGetDeviceId](#oh_ai_deviceinfogetdeviceid) (const [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) device_info) | Obtains the ID of an NNRt device. This function is available only for NNRt devices.|
| [OH_AI_DeviceInfoSetPerformanceMode](#oh_ai_deviceinfosetperformancemode) ([OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) device_info, [OH_AI_PerformanceMode](#oh_ai_performancemode) mode) | Sets the performance mode of an NNRt device. This function is available only for NNRt devices.|
| [OH_AI_DeviceInfoGetPerformanceMode](#oh_ai_deviceinfogetperformancemode) (const [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) device_info) | Obtains the performance mode of an NNRt device. This function is available only for NNRt devices.|
| [OH_AI_DeviceInfoSetPriority](#oh_ai_deviceinfosetpriority) ([OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) device_info, [OH_AI_Priority](#oh_ai_priority) priority) | Sets the priority of an NNRt task. This function is available only for NNRt devices.|
| [OH_AI_DeviceInfoGetPriority](#oh_ai_deviceinfogetpriority) (const [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) device_info) | Obtains the priority of an NNRt task. This function is available only for NNRt devices.|
| [OH_AI_DeviceInfoAddExtension](#oh_ai_deviceinfoaddextension) ([OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) device_info, const char \*name, const char \*value, size_t value_size) | Adds extended configuration in the form of key/value pairs to the device information. This function is available only for NNRt device information.|
| [OH_AI_ModelCreate](#oh_ai_modelcreate) () | Creates a model object.|
| [OH_AI_ModelDestroy](#oh_ai_modeldestroy) ([OH_AI_ModelHandle](#oh_ai_modelhandle) \*model) | Destroys a model object.|
| [OH_AI_ModelBuild](#oh_ai_modelbuild) ([OH_AI_ModelHandle](#oh_ai_modelhandle) model, const void \*model_data, size_t data_size, [OH_AI_ModelType](#oh_ai_modeltype) model_type, const [OH_AI_ContextHandle](#oh_ai_contexthandle) model_context) | Loads and builds a MindSpore model from the memory buffer.|
| [OH_AI_ModelBuildFromFile](#oh_ai_modelbuildfromfile) ([OH_AI_ModelHandle](#oh_ai_modelhandle) model, const char \*model_path, [OH_AI_ModelType](#oh_ai_modeltype) model_type, const [OH_AI_ContextHandle](#oh_ai_contexthandle) model_context) | Loads and builds a MindSpore model from a model file.|
| [OH_AI_ModelResize](#oh_ai_modelresize) ([OH_AI_ModelHandle](#oh_ai_modelhandle) model, const [OH_AI_TensorHandleArray](_o_h___a_i___tensor_handle_array.md) inputs, [OH_AI_ShapeInfo](_o_h___a_i___shape_info.md) \*shape_infos, size_t shape_info_num) | Adjusts the input tensor shapes of a built model.|
| [OH_AI_ModelPredict](#oh_ai_modelpredict) ([OH_AI_ModelHandle](#oh_ai_modelhandle) model, const [OH_AI_TensorHandleArray](_o_h___a_i___tensor_handle_array.md) inputs, [OH_AI_TensorHandleArray](_o_h___a_i___tensor_handle_array.md) \*outputs, const [OH_AI_KernelCallBack](#oh_ai_kernelcallback) before, const [OH_AI_KernelCallBack](#oh_ai_kernelcallback) after) | Performs model inference.|
| [OH_AI_ModelGetInputs](#oh_ai_modelgetinputs) (const [OH_AI_ModelHandle](#oh_ai_modelhandle) model) | Obtains the input tensor array structure of a model.|
| [OH_AI_ModelGetOutputs](#oh_ai_modelgetoutputs) (const [OH_AI_ModelHandle](#oh_ai_modelhandle) model) | Obtains the output tensor array structure of a model.|
| [OH_AI_ModelGetInputByTensorName](#oh_ai_modelgetinputbytensorname) (const [OH_AI_ModelHandle](#oh_ai_modelhandle) model, const char \*tensor_name) | Obtains the input tensor of a model by tensor name.|
| [OH_AI_ModelGetOutputByTensorName](#oh_ai_modelgetoutputbytensorname) (const [OH_AI_ModelHandle](#oh_ai_modelhandle) model, const char \*tensor_name) | Obtains the output tensor of a model by tensor name.|
| [OH_AI_TrainCfgCreate](#oh_ai_traincfgcreate) () | Creates the pointer to the training configuration object. This API is used only for on-device training.|
| [OH_AI_TrainCfgDestroy](#oh_ai_traincfgdestroy) ([OH_AI_TrainCfgHandle](#oh_ai_traincfghandle) \*train_cfg) | Destroys the pointer to the training configuration object. This API is used only for on-device training.|
| [OH_AI_TrainCfgGetLossName](#oh_ai_traincfggetlossname) ([OH_AI_TrainCfgHandle](#oh_ai_traincfghandle) train_cfg, size_t \*num) | Obtains the list of loss functions, which are used only for on-device training.|
| [OH_AI_TrainCfgSetLossName](#oh_ai_traincfgsetlossname) ([OH_AI_TrainCfgHandle](#oh_ai_traincfghandle) train_cfg, const char \*\*loss_name, size_t num) | Sets the list of loss functions, which are used only for on-device training.|
| [OH_AI_TrainCfgGetOptimizationLevel](#oh_ai_traincfggetoptimizationlevel) ([OH_AI_TrainCfgHandle](#oh_ai_traincfghandle) train_cfg) | Obtains the optimization level of the training configuration object. This API is used only for on-device training.|
| [OH_AI_TrainCfgSetOptimizationLevel](#oh_ai_traincfgsetoptimizationlevel) ([OH_AI_TrainCfgHandle](#oh_ai_traincfghandle) train_cfg, [OH_AI_OptimizationLevel](#oh_ai_optimizationlevel) level) | Sets the optimization level of the training configuration object. This API is used only for on-device training.|
| [OH_AI_TrainModelBuild](#oh_ai_trainmodelbuild) ([OH_AI_ModelHandle](#oh_ai_modelhandle) model, const void \*model_data, size_t data_size, [OH_AI_ModelType](#oh_ai_modeltype) model_type, const [OH_AI_ContextHandle](#oh_ai_contexthandle) model_context, const [OH_AI_TrainCfgHandle](#oh_ai_traincfghandle) train_cfg) | Loads a training model from the memory buffer and compiles the model to a state ready for running on the device. This API is used only for on-device training.|
| [OH_AI_TrainModelBuildFromFile](#oh_ai_trainmodelbuildfromfile) ([OH_AI_ModelHandle](#oh_ai_modelhandle) model, const char \*model_path, [OH_AI_ModelType](#oh_ai_modeltype) model_type, const [OH_AI_ContextHandle](#oh_ai_contexthandle) model_context, const [OH_AI_TrainCfgHandle](#oh_ai_traincfghandle) train_cfg) | Loads the training model from the specified path and compiles the model to a state ready for running on the device. This API is used only for on-device training.|
| [OH_AI_RunStep](#oh_ai_runstep) ([OH_AI_ModelHandle](#oh_ai_modelhandle) model, const [OH_AI_KernelCallBack](#oh_ai_kernelcallback) before, const [OH_AI_KernelCallBack](#oh_ai_kernelcallback) after) | Runs a single training step. This API is used only for on-device training.|
| [OH_AI_ModelSetLearningRate](#oh_ai_modelsetlearningrate) ([OH_AI_ModelHandle](#oh_ai_modelhandle) model, float learning_rate) | Sets the learning rate for model training. This API is used only for on-device training.|
| [OH_AI_ModelGetLearningRate](#oh_ai_modelgetlearningrate) ([OH_AI_ModelHandle](#oh_ai_modelhandle) model) | Obtains the learning rate for model training. This API is used only for on-device training.|
| [OH_AI_ModelGetWeights](#oh_ai_modelgetweights) ([OH_AI_ModelHandle](#oh_ai_modelhandle) model) | Obtains all weight tensors of a model. This API is used only for on-device training.|
| [OH_AI_ModelUpdateWeights](#oh_ai_modelupdateweights) ([OH_AI_ModelHandle](#oh_ai_modelhandle) model, const [OH_AI_TensorHandleArray](_o_h___a_i___tensor_handle_array.md) new_weights) | Updates the weight tensors of a model. This API is used only for on-device training.|
| [OH_AI_ModelGetTrainMode](#oh_ai_modelgettrainmode) ([OH_AI_ModelHandle](#oh_ai_modelhandle) model) | Obtains the training mode.|
| [OH_AI_ModelSetTrainMode](#oh_ai_modelsettrainmode) ([OH_AI_ModelHandle](#oh_ai_modelhandle) model, bool train) | Sets the training mode. This API is used only for on-device training.|
| [OH_AI_ModelSetupVirtualBatch](#oh_ai_modelsetupvirtualbatch) ([OH_AI_ModelHandle](#oh_ai_modelhandle) model, int virtual_batch_multiplier, float lr, float momentum) | Sets the virtual batch for training. This API is used only for on-device training.|
| [OH_AI_ExportModel](#oh_ai_exportmodel) ([OH_AI_ModelHandle](#oh_ai_modelhandle) model, [OH_AI_ModelType](#oh_ai_modeltype) model_type, const char \*model_file, [OH_AI_QuantizationType](#oh_ai_quantizationtype) quantization_type, bool export_inference_only, char \*\*output_tensor_name, size_t num) | Exports a training model. This API is used only for on-device training.|
| [OH_AI_ExportModelBuffer](#oh_ai_exportmodelbuffer) ([OH_AI_ModelHandle](#oh_ai_modelhandle) model, [OH_AI_ModelType](#oh_ai_modeltype) model_type, char \*\*model_data, size_t \*data_size, [OH_AI_QuantizationType](#oh_ai_quantizationtype) quantization_type, bool export_inference_only, char \*\*output_tensor_name, size_t num) | Exports the memory cache of the training model. This API is used only for on-device training.|
| [OH_AI_ExportWeightsCollaborateWithMicro](#oh_ai_exportweightscollaboratewithmicro) ([OH_AI_ModelHandle](#oh_ai_modelhandle) model, [OH_AI_ModelType](#oh_ai_modeltype) model_type, const char \*weight_file, bool is_inference, bool enable_fp16, char \*\*changeable_weights_name, size_t num) | Exports the weight file of the training model for micro inference. This API is used only for on-device training.|
| [OH_AI_TensorCreate](#oh_ai_tensorcreate) (const char \*name, [OH_AI_DataType](#oh_ai_datatype) type, const int64_t \*shape, size_t shape_num, const void \*data, size_t data_len) | Creates a tensor object.|
| [OH_AI_TensorDestroy](#oh_ai_tensordestroy) ([OH_AI_TensorHandle](#oh_ai_tensorhandle) \*tensor) | Destroys a tensor object.|
| [OH_AI_TensorClone](#oh_ai_tensorclone) ([OH_AI_TensorHandle](#oh_ai_tensorhandle) tensor) | Clones a tensor.|
| [OH_AI_TensorSetName](#oh_ai_tensorsetname) ([OH_AI_TensorHandle](#oh_ai_tensorhandle) tensor, const char \*name) | Sets the tensor name.|
| [OH_AI_TensorGetName](#oh_ai_tensorgetname) (const [OH_AI_TensorHandle](#oh_ai_tensorhandle) tensor) | Obtains the tensor name.|
| [OH_AI_TensorSetDataType](#oh_ai_tensorsetdatatype) ([OH_AI_TensorHandle](#oh_ai_tensorhandle) tensor, [OH_AI_DataType](#oh_ai_datatype) type) | Sets the data type of a tensor.|
| [OH_AI_TensorGetDataType](#oh_ai_tensorgetdatatype) (const [OH_AI_TensorHandle](#oh_ai_tensorhandle) tensor) | Obtains the tensor type.|
| [OH_AI_TensorSetShape](#oh_ai_tensorsetshape) ([OH_AI_TensorHandle](#oh_ai_tensorhandle) tensor, const int64_t \*shape, size_t shape_num) | Sets the tensor shape.|
| [OH_AI_TensorGetShape](#oh_ai_tensorgetshape) (const [OH_AI_TensorHandle](#oh_ai_tensorhandle) tensor, size_t \*shape_num) | Obtains the tensor shape.|
| [OH_AI_TensorSetFormat](#oh_ai_tensorsetformat) ([OH_AI_TensorHandle](#oh_ai_tensorhandle) tensor, [OH_AI_Format](#oh_ai_format) format) | Sets the tensor data format.|
| [OH_AI_TensorGetFormat](#oh_ai_tensorgetformat) (const [OH_AI_TensorHandle](#oh_ai_tensorhandle) tensor) | Obtains the tensor data format.|
| [OH_AI_TensorSetData](#oh_ai_tensorsetdata) ([OH_AI_TensorHandle](#oh_ai_tensorhandle) tensor, void \*data) | Sets the tensor data.|
| [OH_AI_TensorGetData](#oh_ai_tensorgetdata) (const [OH_AI_TensorHandle](#oh_ai_tensorhandle) tensor) | Obtains the pointer to tensor data.|
| [OH_AI_TensorGetMutableData](#oh_ai_tensorgetmutabledata) (const [OH_AI_TensorHandle](#oh_ai_tensorhandle) tensor) | Obtains the pointer to mutable tensor data. If the data is empty, memory will be allocated.|
| [OH_AI_TensorGetElementNum](#oh_ai_tensorgetelementnum) (const [OH_AI_TensorHandle](#oh_ai_tensorhandle) tensor) | Obtains the number of tensor elements.|
| [OH_AI_TensorGetDataSize](#oh_ai_tensorgetdatasize) (const [OH_AI_TensorHandle](#oh_ai_tensorhandle) tensor) | Obtains the number of bytes of the tensor data.|
| [OH_AI_TensorSetUserData](#oh_ai_tensorsetuserdata) ([OH_AI_TensorHandle](#oh_ai_tensorhandle) tensor, void \*data, size_t data_size) | Sets the tensor as the user data. This function allows you to reuse user data as the model input, which saves one data copy.<br>> **NOTE**<br>The user data is external data for the tensor and is not automatically released when the tensor is destroyed. The caller must release it separately and ensure that it remains valid for as long as the tensor uses it.|
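
Taken together, the context, model, and tensor functions above form a typical inference pass. The sketch below is illustrative only: **model.ms** is a placeholder path, the input data is zero-filled as a stand-in for real preprocessing, and the **handle_num**/**handle_list** field names of [OH_AI_TensorHandleArray](_o_h___a_i___tensor_handle_array.md) are assumptions to verify against your header version.

```c
#include <mindspore/context.h>
#include <mindspore/model.h>
#include <mindspore/status.h>
#include <mindspore/tensor.h>
#include <stddef.h>
#include <string.h>

int RunInference(void) {
  // 1. Create a context and attach CPU device information.
  OH_AI_ContextHandle context = OH_AI_ContextCreate();
  OH_AI_DeviceInfoHandle cpu_info = OH_AI_DeviceInfoCreate(OH_AI_DEVICETYPE_CPU);
  OH_AI_ContextAddDeviceInfo(context, cpu_info);  // context now owns cpu_info

  // 2. Load and build the model ("model.ms" is a placeholder path).
  OH_AI_ModelHandle model = OH_AI_ModelCreate();
  if (OH_AI_ModelBuildFromFile(model, "model.ms", OH_AI_MODELTYPE_MINDIR,
                               context) != OH_AI_STATUS_SUCCESS) {
    OH_AI_ModelDestroy(&model);
    OH_AI_ContextDestroy(&context);
    return -1;
  }

  // 3. Fill the input tensors in place (zero-filled as a stand-in here).
  OH_AI_TensorHandleArray inputs = OH_AI_ModelGetInputs(model);
  for (size_t i = 0; i < inputs.handle_num; ++i) {
    void *data = OH_AI_TensorGetMutableData(inputs.handle_list[i]);
    memset(data, 0, OH_AI_TensorGetDataSize(inputs.handle_list[i]));
  }

  // 4. Run inference without per-operator callbacks.
  OH_AI_TensorHandleArray outputs;
  OH_AI_Status ret = OH_AI_ModelPredict(model, inputs, &outputs, NULL, NULL);

  OH_AI_ModelDestroy(&model);
  OH_AI_ContextDestroy(&context);
  return ret == OH_AI_STATUS_SUCCESS ? 0 : -1;
}
```

The output tensors are owned by the model, so only the model and context objects are destroyed explicitly.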


## Macro Description


### OH_AI_MAX_SHAPE_NUM

```
#define OH_AI_MAX_SHAPE_NUM   32
```

**Description**

Defines the maximum number of dimensions in a tensor shape. Dimension counts in [OH_AI_ShapeInfo](_o_h___a_i___shape_info.md) are bounded by **OH_AI_MAX_SHAPE_NUM**.

**Since**: 9


## Type Description


### NNRTDeviceDesc

```
typedef struct NNRTDeviceDesc NNRTDeviceDesc
```

**Description**

Defines the NNRt device information, including the device ID and device name.

**Since**: 10


### OH_AI_CallBackParam

```
typedef struct OH_AI_CallBackParam OH_AI_CallBackParam
```

**Description**

Defines the operator information passed in a callback.

**Since**: 9


### OH_AI_ContextHandle

```
typedef void* OH_AI_ContextHandle
```

**Description**

Defines the pointer to the MindSpore context.

**Since**: 9


### OH_AI_DataType

```
typedef enum OH_AI_DataType OH_AI_DataType
```

**Description**

Declares data types supported by MSTensor.

**Since**: 9


### OH_AI_DeviceInfoHandle

```
typedef void* OH_AI_DeviceInfoHandle
```

**Description**

Defines the pointer to the MindSpore device information.

**Since**: 9


### OH_AI_DeviceType

```
typedef enum OH_AI_DeviceType OH_AI_DeviceType
```

**Description**

Defines the supported device types.

**Since**: 9


### OH_AI_Format

```
typedef enum OH_AI_Format OH_AI_Format
```

**Description**

Declares data formats supported by MSTensor.

**Since**: 9


### OH_AI_KernelCallBack

```
typedef bool (*OH_AI_KernelCallBack) (const OH_AI_TensorHandleArray inputs, const OH_AI_TensorHandleArray outputs, const OH_AI_CallBackParam kernel_Info)
```

**Description**

Defines the pointer to a callback.

This pointer is used to set the two callback functions in [OH_AI_ModelPredict](#oh_ai_modelpredict). Each callback must take three parameters: **inputs** and **outputs** are the input and output tensors of the operator, and **kernel_Info** describes the current operator. The callbacks can be used to monitor operator execution, for example operator execution time and correctness.

**Since**: 9
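
A minimal callback matching this signature can be sketched as follows. The stand-in type declarations make the example self-contained; real code includes &lt;mindspore/model.h&gt;, and the **node_name**/**handle_num** field names used here are assumptions to verify against your header version.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

// Stand-in declarations mirroring the documented types for illustration only.
typedef void *OH_AI_TensorHandle;
typedef struct OH_AI_TensorHandleArray {
  size_t handle_num;                // tensor array length (assumed field name)
  OH_AI_TensorHandle *handle_list;  // tensor array pointer (assumed field name)
} OH_AI_TensorHandleArray;
typedef struct OH_AI_CallBackParam {
  char *node_name;  // operator name (assumed field name)
  char *node_type;  // operator type (assumed field name)
} OH_AI_CallBackParam;

// Called before each operator runs; returning false aborts inference.
bool BeforeKernel(const OH_AI_TensorHandleArray inputs,
                  const OH_AI_TensorHandleArray outputs,
                  const OH_AI_CallBackParam kernel_Info) {
  printf("running %s (%zu inputs, %zu outputs)\n",
         kernel_Info.node_name, inputs.handle_num, outputs.handle_num);
  return true;
}
```

In real code, pass such a function as the **before** argument of **OH_AI_ModelPredict**; a matching **after** callback can record completion or timing per operator.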


### OH_AI_ModelHandle

```
typedef void* OH_AI_ModelHandle
```

**Description**

Defines the pointer to a model object.

**Since**: 9


### OH_AI_ModelType

```
typedef enum OH_AI_ModelType OH_AI_ModelType
```

**Description**

Defines model file types.

**Since**: 9


### OH_AI_NNRTDeviceType

```
typedef enum OH_AI_NNRTDeviceType OH_AI_NNRTDeviceType
```

**Description**

Defines NNRt device types.

**Since**: 10


### OH_AI_PerformanceMode

```
typedef enum OH_AI_PerformanceMode OH_AI_PerformanceMode
```

**Description**

Defines performance modes of the NNRt device.

**Since**: 10


### OH_AI_Priority

```
typedef enum OH_AI_Priority OH_AI_Priority
```

**Description**

Defines NNRt inference task priorities.

**Since**: 10


### OH_AI_Status

```
typedef enum OH_AI_Status OH_AI_Status
```

**Description**

Defines MindSpore status codes.

**Since**: 9


### OH_AI_TensorHandle

```
typedef void* OH_AI_TensorHandle
```

**Description**

Defines the handle of a tensor object.

**Since**: 9


### OH_AI_TensorHandleArray

```
typedef struct OH_AI_TensorHandleArray OH_AI_TensorHandleArray
```

**Description**

Defines the tensor array structure, which stores the tensor array pointer and tensor array length.

**Since**: 9
402
403
404### OH_AI_TrainCfgHandle
405
406```
407typedef void* OH_AI_TrainCfgHandle
408```
409
410**Description**
411
412Defines the pointer to a training configuration object.
413
414**Since**: 11
415
416
417## Enum Description
418
419
420### OH_AI_CompCode
421
422```
423enum OH_AI_CompCode
424```
425
426**Description**
427
428Defines MindSpore component codes.
429
430**Since**: 9
431
432| Value| Description|
433| -------- | -------- |
434| OH_AI_COMPCODE_CORE | MindSpore Core code.|
435| OH_AI_COMPCODE_MD   | MindSpore MindData code.|
436| OH_AI_COMPCODE_ME   | MindSpore MindExpression code.|
437| OH_AI_COMPCODE_MC   | MindSpore code.|
438| OH_AI_COMPCODE_LITE | MindSpore Lite code.|
439
440
441### OH_AI_DataType
442
443```
444enum OH_AI_DataType
445```
446
447**Description**
448
449Declares data types supported by MSTensor.
450
451**Since**: 9
452
453| Value| Description|
454| -------- | -------- |
455| OH_AI_DATATYPE_UNKNOWN | Unknown data type.|
456| OH_AI_DATATYPE_OBJECTTYPE_STRING | String data.|
457| OH_AI_DATATYPE_OBJECTTYPE_LIST | List data.|
458| OH_AI_DATATYPE_OBJECTTYPE_TUPLE | Tuple data.|
459| OH_AI_DATATYPE_OBJECTTYPE_TENSOR | TensorList data.|
460| OH_AI_DATATYPE_NUMBERTYPE_BEGIN | Beginning of the number type.|
461| OH_AI_DATATYPE_NUMBERTYPE_BOOL | Bool data.|
462| OH_AI_DATATYPE_NUMBERTYPE_INT8 | Int8 data.|
463| OH_AI_DATATYPE_NUMBERTYPE_INT16 | Int16 data.|
464| OH_AI_DATATYPE_NUMBERTYPE_INT32 | Int32 data.|
465| OH_AI_DATATYPE_NUMBERTYPE_INT64 | Int64 data.|
466| OH_AI_DATATYPE_NUMBERTYPE_UINT8 | UInt8 data.|
467| OH_AI_DATATYPE_NUMBERTYPE_UINT16 | UInt16 data.|
468| OH_AI_DATATYPE_NUMBERTYPE_UINT32 | UInt32 data.|
469| OH_AI_DATATYPE_NUMBERTYPE_UINT64 | UInt64 data.|
470| OH_AI_DATATYPE_NUMBERTYPE_FLOAT16 | Float16 data.|
471| OH_AI_DATATYPE_NUMBERTYPE_FLOAT32 | Float32 data.|
472| OH_AI_DATATYPE_NUMBERTYPE_FLOAT64 | Float64 data.|
473| OH_AI_DATATYPE_NUMBERTYPE_END | End of the number type.|
474| OH_AI_DataTypeInvalid | Invalid data type.|
475
476
477### OH_AI_DeviceType
478
479```
480enum OH_AI_DeviceType
481```
482
483**Description**
484
485Defines the supported device types.
486
487**Since**: 9
488
489| Value| Description|
490| -------- | -------- |
491| OH_AI_DEVICETYPE_CPU | Device type: CPU|
492| OH_AI_DEVICETYPE_GPU | Device type: GPU|
493| OH_AI_DEVICETYPE_KIRIN_NPU | Device type: Kirin NPU|
494| OH_AI_DEVICETYPE_NNRT | Device type: NNRt<br>OHOS device range: [60, 80)|
495| OH_AI_DEVICETYPE_INVALID | Invalid device type|
496
497
498### OH_AI_Format
499
500```
501enum OH_AI_Format
502```
503
504**Description**
505
506Declares data formats supported by MSTensor.
507
508**Since**: 9
509
510| Value| Description|
511| -------- | -------- |
512| OH_AI_FORMAT_NCHW | Tensor data is stored in the sequence of batch number N, channel C, height H, and width W.|
513| OH_AI_FORMAT_NHWC | Tensor data is stored in the sequence of batch number N, height H, width W, and channel C.|
514| OH_AI_FORMAT_NHWC4 | Tensor data is stored in the sequence of batch number N, height H, width W, and channel C. The C axis is 4-byte aligned.|
515| OH_AI_FORMAT_HWKC | Tensor data is stored in the sequence of height H, width W, core count K, and channel C.|
516| OH_AI_FORMAT_HWCK | Tensor data is stored in the sequence of height H, width W, channel C, and core count K.|
517| OH_AI_FORMAT_KCHW | Tensor data is stored in the sequence of core count K, channel C, height H, and width W.|
518| OH_AI_FORMAT_CKHW | Tensor data is stored in the sequence of channel C, core count K, height H, and width W.|
519| OH_AI_FORMAT_KHWC | Tensor data is stored in the sequence of core count K, height H, width W, and channel C.|
520| OH_AI_FORMAT_CHWK | Tensor data is stored in the sequence of channel C, height H, width W, and core count K.|
521| OH_AI_FORMAT_HW | Tensor data is stored in the sequence of height H and width W.|
522| OH_AI_FORMAT_HW4 | Tensor data is stored in the sequence of height H and width W. The W axis is 4-byte aligned.|
523| OH_AI_FORMAT_NC | Tensor data is stored in the sequence of batch number N and channel C.|
524| OH_AI_FORMAT_NC4 | Tensor data is stored in the sequence of batch number N and channel C. The C axis is 4-byte aligned.|
525| OH_AI_FORMAT_NC4HW4 | Tensor data is stored in the sequence of batch number N, channel C, height H, and width W. The C axis and W axis are 4-byte aligned.|
526| OH_AI_FORMAT_NCDHW | Tensor data is stored in the sequence of batch number N, channel C, depth D, height H, and width W.|
527| OH_AI_FORMAT_NWC | Tensor data is stored in the sequence of batch number N, width W, and channel C.|
528| OH_AI_FORMAT_NCW | Tensor data is stored in the sequence of batch number N, channel C, and width W.|
529
530
531### OH_AI_ModelType
532
533```
534enum OH_AI_ModelType
535```
536
537**Description**
538
539Defines model file types.
540
541**Since**: 9
542
543| Value| Description|
544| -------- | -------- |
545| OH_AI_MODELTYPE_MINDIR | Model type of MindIR. The extension of the model file name is **.ms**.|
546| OH_AI_MODELTYPE_INVALID | Invalid model type.|
547
548
549### OH_AI_NNRTDeviceType
550
551```
552enum OH_AI_NNRTDeviceType
553```
554
555**Description**
556
557Defines NNRt device types.
558
559**Since**: 10
560
561| Value| Description|
562| -------- | -------- |
563| OH_AI_NNRTDEVICE_OTHERS | Others (any device type except the following three types).|
564| OH_AI_NNRTDEVICE_CPU | CPU.|
565| OH_AI_NNRTDEVICE_GPU | GPU.|
566| OH_AI_NNRTDEVICE_ACCELERATOR | Specific acceleration device.|
567
568
569### OH_AI_OptimizationLevel
570
571```
572enum OH_AI_OptimizationLevel
573```
574
575**Description**
576
577Defines training optimization levels.
578
**Since**: 11
582
583| Value| Description|
584| -------- | -------- |
585| OH_AI_KO0 | No optimization level.|
586| OH_AI_KO2 | Converts the precision type of the network to float16 and keeps the precision type of the batch normalization layer and loss function as float32.|
587| OH_AI_KO3 | Converts the precision type of the network (including the batch normalization layer) to float16.|
588| OH_AI_KAUTO | Selects an optimization level based on the device.|
589| OH_AI_KOPTIMIZATIONTYPE | Invalid optimization level.|
590
591
592### OH_AI_PerformanceMode
593
594```
595enum OH_AI_PerformanceMode
596```
597
598**Description**
599
600Defines performance modes of the NNRt device.
601
602**Since**: 10
603
604| Value| Description|
605| -------- | -------- |
606| OH_AI_PERFORMANCE_NONE | No special settings.|
607| OH_AI_PERFORMANCE_LOW | Low power consumption.|
| OH_AI_PERFORMANCE_MEDIUM | Balanced power consumption and performance.|
609| OH_AI_PERFORMANCE_HIGH | High performance.|
610| OH_AI_PERFORMANCE_EXTREME | Ultimate performance.|
611
612
613### OH_AI_Priority
614
615```
616enum OH_AI_Priority
617```
618
619**Description**
620
621Defines NNRt inference task priorities.
622
623**Since**: 10
624
625| Value| Description|
626| -------- | -------- |
627| OH_AI_PRIORITY_NONE | No priority preference.|
628| OH_AI_PRIORITY_LOW | Low priority.|
629| OH_AI_PRIORITY_MEDIUM | Medium priority.|
630| OH_AI_PRIORITY_HIGH | High priority.|
631
632
633### OH_AI_QuantizationType
634
635```
636enum OH_AI_QuantizationType
637```
638
639**Description**
640
641Defines quantization types.
642
**Since**: 11
646
647| Value| Description|
648| -------- | -------- |
| OH_AI_NO_QUANT | No quantization.|
650| OH_AI_WEIGHT_QUANT | Weight quantization.|
651| OH_AI_FULL_QUANT | Full quantization.|
652| OH_AI_UNKNOWN_QUANT_TYPE | Invalid quantization type.|
653
654
655### OH_AI_Status
656
657```
658enum OH_AI_Status
659```
660
661**Description**
662
663Defines MindSpore status codes.
664
665**Since**: 9
666
667| Value| Description|
668| -------- | -------- |
669| OH_AI_STATUS_SUCCESS | Success.|
670| OH_AI_STATUS_CORE_FAILED | MindSpore Core failure.|
671| OH_AI_STATUS_LITE_ERROR | MindSpore Lite error.|
672| OH_AI_STATUS_LITE_NULLPTR | MindSpore Lite null pointer.|
673| OH_AI_STATUS_LITE_PARAM_INVALID | MindSpore Lite invalid parameters.|
674| OH_AI_STATUS_LITE_NO_CHANGE | MindSpore Lite no change.|
675| OH_AI_STATUS_LITE_SUCCESS_EXIT | MindSpore Lite exit without errors.|
676| OH_AI_STATUS_LITE_MEMORY_FAILED | MindSpore Lite memory allocation failure.|
677| OH_AI_STATUS_LITE_NOT_SUPPORT | MindSpore Lite function not supported.|
678| OH_AI_STATUS_LITE_THREADPOOL_ERROR | MindSpore Lite thread pool error.|
679| OH_AI_STATUS_LITE_UNINITIALIZED_OBJ | MindSpore Lite uninitialized.|
680| OH_AI_STATUS_LITE_OUT_OF_TENSOR_RANGE | MindSpore Lite tensor overflow.|
681| OH_AI_STATUS_LITE_INPUT_TENSOR_ERROR | MindSpore Lite input tensor error.|
682| OH_AI_STATUS_LITE_REENTRANT_ERROR | MindSpore Lite reentry error.|
683| OH_AI_STATUS_LITE_GRAPH_FILE_ERROR | MindSpore Lite file error.|
684| OH_AI_STATUS_LITE_NOT_FIND_OP | MindSpore Lite operator not found.|
685| OH_AI_STATUS_LITE_INVALID_OP_NAME | MindSpore Lite invalid operators.|
686| OH_AI_STATUS_LITE_INVALID_OP_ATTR | MindSpore Lite invalid operator hyperparameters.|
687| OH_AI_STATUS_LITE_OP_EXECUTE_FAILURE | MindSpore Lite operator execution failure.|
688| OH_AI_STATUS_LITE_FORMAT_ERROR | MindSpore Lite tensor format error.|
689| OH_AI_STATUS_LITE_INFER_ERROR | MindSpore Lite shape inference error.|
690| OH_AI_STATUS_LITE_INFER_INVALID | MindSpore Lite invalid shape inference.|
691| OH_AI_STATUS_LITE_INPUT_PARAM_INVALID | MindSpore Lite invalid input parameters.|
692
693
694## Function Description
695
696
697### OH_AI_ContextAddDeviceInfo()
698
699```
700OH_AI_API void OH_AI_ContextAddDeviceInfo (OH_AI_ContextHandle context, OH_AI_DeviceInfoHandle device_info )
701```
702
703**Description**
704
705Attaches the custom device information to the inference context.
706
707**Since**: 9
708
709**Parameters**
710
711| Name| Description|
712| -------- | -------- |
713| context | [OH_AI_ContextHandle](#oh_ai_contexthandle) that points to the context instance.|
714| device_info | [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) that points to a device information instance.|
715
716
717### OH_AI_ContextCreate()
718
719```
720OH_AI_API OH_AI_ContextHandle OH_AI_ContextCreate ()
721```
722
723**Description**
724
725Creates a context object.
726
727**Since**: 9
728
729**Returns**
730
731[OH_AI_ContextHandle](#oh_ai_contexthandle) that points to the context instance.
732
733
734### OH_AI_ContextDestroy()
735
736```
737OH_AI_API void OH_AI_ContextDestroy (OH_AI_ContextHandle * context)
738```
739
740**Description**
741
742Destroys a context object.
743
744**Since**: 9
745
746**Parameters**
747
748| Name| Description|
749| -------- | -------- |
| context | Double pointer to [OH_AI_ContextHandle](#oh_ai_contexthandle). After the context is destroyed, the pointer is set to null.|
751
752
753### OH_AI_ContextGetEnableParallel()
754
755```
756OH_AI_API bool OH_AI_ContextGetEnableParallel (const OH_AI_ContextHandle context)
757```
758
759**Description**
760
761Checks whether parallelism between operators is supported.
762
763**Since**: 9
764
765**Parameters**
766
767| Name| Description|
768| -------- | -------- |
769| context | [OH_AI_ContextHandle](#oh_ai_contexthandle) that points to the context instance.|
770
771**Returns**
772
773Whether parallelism between operators is supported. The value **true** means that parallelism between operators is supported, and the value **false** means the opposite.
774
775
776### OH_AI_ContextGetThreadAffinityCoreList()
777
778```
779OH_AI_API const int32_t* OH_AI_ContextGetThreadAffinityCoreList (const OH_AI_ContextHandle context, size_t * core_num )
780```
781
782**Description**
783
784Obtains the list of bound CPU cores.
785
786**Since**: 9
787
788**Parameters**
789
790| Name| Description|
791| -------- | -------- |
792| context | [OH_AI_ContextHandle](#oh_ai_contexthandle) that points to the context instance.|
793| core_num | Number of CPU cores.|
794
795**Returns**
796
797CPU core binding list. This list is managed by [OH_AI_ContextHandle](#oh_ai_contexthandle). The caller does not need to destroy it manually.
798
799
800### OH_AI_ContextGetThreadAffinityMode()
801
802```
803OH_AI_API int OH_AI_ContextGetThreadAffinityMode (const OH_AI_ContextHandle context)
804```
805
806**Description**
807
808Obtains the affinity mode for binding runtime threads to CPU cores.
809
810**Since**: 9
811
812**Parameters**
813
814| Name| Description|
815| -------- | -------- |
816| context | [OH_AI_ContextHandle](#oh_ai_contexthandle) that points to the context instance.|
817
818**Returns**
819
820Affinity mode. **0**: no affinities; **1**: big cores first; **2**: medium cores first
821
822
823### OH_AI_ContextGetThreadNum()
824
825```
826OH_AI_API int32_t OH_AI_ContextGetThreadNum (const OH_AI_ContextHandle context)
827```
828
829**Description**
830
831Obtains the number of threads.
832
833**Since**: 9
834
835**Parameters**
836
837| Name| Description|
838| -------- | -------- |
839| context | [OH_AI_ContextHandle](#oh_ai_contexthandle) that points to the context instance.|
840
841**Returns**
842
843Number of threads.
844
845
846### OH_AI_ContextSetEnableParallel()
847
848```
849OH_AI_API void OH_AI_ContextSetEnableParallel (OH_AI_ContextHandle context, bool is_parallel )
850```
851
852**Description**
853
Sets whether to enable parallelism between operators. This setting currently has no effect, because the underlying feature is not yet available.
855
856**Since**: 9
857
858**Parameters**
859
860| Name| Description|
861| -------- | -------- |
862| context | [OH_AI_ContextHandle](#oh_ai_contexthandle) that points to the context instance.|
863| is_parallel | Whether parallelism between operators is supported. The value **true** means that parallelism between operators is supported, and the value **false** means the opposite.|
864
865
866### OH_AI_ContextSetThreadAffinityCoreList()
867
868```
869OH_AI_API void OH_AI_ContextSetThreadAffinityCoreList (OH_AI_ContextHandle context, const int32_t * core_list, size_t core_num )
870```
871
872**Description**
873
874Sets the list of CPU cores bound to a runtime thread.
875
876For example, if **core_list** is set to **[2,6,8]**, threads run on the 2nd, 6th, and 8th CPU cores. If [OH_AI_ContextSetThreadAffinityMode](#oh_ai_contextsetthreadaffinitymode) and [OH_AI_ContextSetThreadAffinityCoreList](#oh_ai_contextsetthreadaffinitycorelist) are called for the same context object, the **core_list** parameter of [OH_AI_ContextSetThreadAffinityCoreList](#oh_ai_contextsetthreadaffinitycorelist) takes effect, but the **mode** parameter of [OH_AI_ContextSetThreadAffinityMode](#oh_ai_contextsetthreadaffinitymode) does not.
877
878**Since**: 9
879
880**Parameters**
881
882| Name| Description|
883| -------- | -------- |
884| context | [OH_AI_ContextHandle](#oh_ai_contexthandle) that points to the context instance.|
885| core_list | List of bound CPU cores.|
886| core_num | Number of cores, which indicates the length of **core_list**.|
887
888
889### OH_AI_ContextSetThreadAffinityMode()
890
891```
892OH_AI_API void OH_AI_ContextSetThreadAffinityMode (OH_AI_ContextHandle context, int mode )
893```
894
895**Description**
896
Sets the affinity mode for binding runtime threads to CPU cores. Based on the CPU frequency, cores are classified as big, medium, or little; only big or medium cores can be bound, not little cores.
898
899**Since**: 9
900
901**Parameters**
902
903| Name| Description|
904| -------- | -------- |
905| context | [OH_AI_ContextHandle](#oh_ai_contexthandle) that points to the context instance.|
906| mode | Affinity mode. **0**: no affinities; **1**: big cores first; **2**: medium cores first|
907
908
909### OH_AI_ContextSetThreadNum()
910
911```
912OH_AI_API void OH_AI_ContextSetThreadNum (OH_AI_ContextHandle context, int32_t thread_num )
913```
914
915**Description**
916
917Sets the number of runtime threads.
918
919**Since**: 9
920
921**Parameters**
922
923| Name| Description|
924| -------- | -------- |
925| context | [OH_AI_ContextHandle](#oh_ai_contexthandle) that points to the context instance.|
926| thread_num | Number of runtime threads.|
927
928
929### OH_AI_CreateNNRTDeviceInfoByName()
930
931```
932OH_AI_API OH_AI_DeviceInfoHandle OH_AI_CreateNNRTDeviceInfoByName (const char * name)
933```
934
935**Description**
936
Searches NNRt devices by name and creates NNRt device information from the first matching device.
938
939**Since**: 10
940
941**Parameters**
942
943| Name| Description|
944| -------- | -------- |
945| name | Name of the NNRt device.|
946
947**Returns**
948
949[OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) that points to a device information instance.
950
951
952### OH_AI_CreateNNRTDeviceInfoByType()
953
954```
955OH_AI_API OH_AI_DeviceInfoHandle OH_AI_CreateNNRTDeviceInfoByType (OH_AI_NNRTDeviceType type)
956```
957
958**Description**
959
Searches NNRt devices by type and creates NNRt device information from the first matching device.
961
962**Since**: 10
963
964**Parameters**
965
966| Name| Description|
967| -------- | -------- |
968| type | NNRt device type, which is specified by [OH_AI_NNRTDeviceType](#oh_ai_nnrtdevicetype).|
969
970**Returns**
971
972[OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) that points to a device information instance.
973
974
975### OH_AI_DestroyAllNNRTDeviceDescs()
976
977```
978OH_AI_API void OH_AI_DestroyAllNNRTDeviceDescs (NNRTDeviceDesc ** desc)
979```
980
981**Description**
982
983Destroys the NNRt device description array obtained by [OH_AI_GetAllNNRTDeviceDescs](#oh_ai_getallnnrtdevicedescs).
984
985**Since**: 10
986
987**Parameters**
988
989| Name| Description|
990| -------- | -------- |
991| desc | Double pointer to the array of the NNRt device descriptions. After the operation is complete, the content pointed to by **desc** is set to **NULL**.|
992
993
994### OH_AI_DeviceInfoAddExtension()
995
996```
997OH_AI_API OH_AI_Status OH_AI_DeviceInfoAddExtension (OH_AI_DeviceInfoHandle device_info, const char * name, const char * value, size_t value_size )
998```
999
1000**Description**
1001
1002Adds extended configuration in the form of key/value pairs to the device information. This function is available only for NNRt device information.
1003
>**NOTE**<br>The currently supported key/value pairs are {"CachePath": "YourCachePath"}, {"CacheVersion": "YourCacheVersion"}, and {"QuantParam": "YourQuantConfig"}. Replace the values with the actual ones as required.
1005
1006**Since**: 10
1007
1008**Parameters**
1009
1010| Name| Description|
1011| -------- | -------- |
1012| device_info | [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) that points to a device information instance.|
1013| name | Key in an extended key/value pair. The value is a C string.|
1014| value |  Start address of the value in an extended key/value pair.|
1015| value_size | Length of the value in an extended key/value pair.|
1016
1017**Returns**
1018
1019Status code enumerated by [OH_AI_Status](#oh_ai_status). The value **OH_AI_STATUS_SUCCESS** indicates that the operation is successful. If the operation fails, an error code is returned.
1020
1021
1022### OH_AI_DeviceInfoCreate()
1023
1024```
1025OH_AI_API OH_AI_DeviceInfoHandle OH_AI_DeviceInfoCreate (OH_AI_DeviceType device_type)
1026```
1027
1028**Description**
1029
1030Creates a device information object.
1031
1032**Since**: 9
1033
1034**Parameters**
1035
1036| Name| Description|
1037| -------- | -------- |
1038| device_type | Device type, which is specified by [OH_AI_DeviceType](#oh_ai_devicetype).|
1039
1040**Returns**
1041
1042[OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) that points to a device information instance.
1043
1044
1045### OH_AI_DeviceInfoDestroy()
1046
1047```
1048OH_AI_API void OH_AI_DeviceInfoDestroy (OH_AI_DeviceInfoHandle * device_info)
1049```
1050
1051**Description**
1052
1053Destroys a device information object. Note: After the device information instance is added to the context, the caller does not need to destroy it manually.
1054
1055**Since**: 9
1056
1057**Parameters**
1058
1059| Name| Description|
1060| -------- | -------- |
1061| device_info | [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) that points to a device information instance.|
1062
1063
1064### OH_AI_DeviceInfoGetDeviceId()
1065
1066```
1067OH_AI_API size_t OH_AI_DeviceInfoGetDeviceId (const OH_AI_DeviceInfoHandle device_info)
1068```
1069
1070**Description**
1071
1072Obtains the ID of an NNRt device. This function is available only for NNRt devices.
1073
1074**Since**: 10
1075
1076**Parameters**
1077
1078| Name| Description|
1079| -------- | -------- |
1080| device_info | [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) that points to a device information instance.|
1081
1082**Returns**
1083
1084NNRt device ID.
1085
1086
1087### OH_AI_DeviceInfoGetDeviceType()
1088
1089```
1090OH_AI_API OH_AI_DeviceType OH_AI_DeviceInfoGetDeviceType (const OH_AI_DeviceInfoHandle device_info)
1091```
1092
1093**Description**
1094
1095Obtains the device type.
1096
1097**Since**: 9
1098
1099**Parameters**
1100
1101| Name| Description|
1102| -------- | -------- |
1103| device_info | [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) that points to a device information instance.|
1104
1105**Returns**
1106
1107Type of the provider device.
1108
1109
1110### OH_AI_DeviceInfoGetEnableFP16()
1111
1112```
1113OH_AI_API bool OH_AI_DeviceInfoGetEnableFP16 (const OH_AI_DeviceInfoHandle device_info)
1114```
1115
1116**Description**
1117
1118Checks whether float16 inference is enabled. This function is available only for CPU and GPU devices.
1119
1120**Since**: 9
1121
1122**Parameters**
1123
1124| Name| Description|
1125| -------- | -------- |
1126| device_info | [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) that points to a device information instance.|
1127
1128**Returns**
1129
1130Whether float16 inference is enabled.
1131
1132
1133### OH_AI_DeviceInfoGetFrequency()
1134
1135```
1136OH_AI_API int OH_AI_DeviceInfoGetFrequency (const OH_AI_DeviceInfoHandle device_info)
1137```
1138
1139**Description**
1140
1141Obtains the NPU frequency type. This function is available only for NPU devices.
1142
1143**Since**: 9
1144
1145**Parameters**
1146
1147| Name| Description|
1148| -------- | -------- |
1149| device_info | [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) that points to a device information instance.|
1150
1151**Returns**
1152
1153NPU frequency type. The value ranges from **0** to **4**. **1**: low power consumption; **2**: balanced; **3**: high performance; **4**: ultra-high performance
1154
1155
1156### OH_AI_DeviceInfoGetPerformanceMode()
1157
1158```
1159OH_AI_API OH_AI_PerformanceMode OH_AI_DeviceInfoGetPerformanceMode (const OH_AI_DeviceInfoHandle device_info)
1160```
1161
1162**Description**
1163
1164Obtains the performance mode of an NNRt device. This function is available only for NNRt devices.
1165
1166**Since**: 10
1167
1168**Parameters**
1169
1170| Name| Description|
1171| -------- | -------- |
1172| device_info | [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) that points to a device information instance.|
1173
1174**Returns**
1175
1176NNRt performance mode, which is specified by [OH_AI_PerformanceMode](#oh_ai_performancemode).
1177
1178
1179### OH_AI_DeviceInfoGetPriority()
1180
1181```
1182OH_AI_API OH_AI_Priority OH_AI_DeviceInfoGetPriority (const OH_AI_DeviceInfoHandle device_info)
1183```
1184
1185**Description**
1186
1187Obtains the priority of an NNRt task. This function is available only for NNRt devices.
1188
1189**Since**: 10
1190
1191**Parameters**
1192
1193| Name| Description|
1194| -------- | -------- |
1195| device_info | [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) that points to a device information instance.|
1196
1197**Returns**
1198
1199NNRt task priority, which is specified by [OH_AI_Priority](#oh_ai_priority).
1200
1201
1202### OH_AI_DeviceInfoGetProvider()
1203
1204```
1205OH_AI_API const char* OH_AI_DeviceInfoGetProvider (const OH_AI_DeviceInfoHandle device_info)
1206```
1207
1208**Description**
1209
1210Obtains the provider name.
1211
1212**Since**: 9
1213
1214**Parameters**
1215
1216| Name| Description|
1217| -------- | -------- |
1218| device_info | [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) that points to a device information instance.|
1219
1220**Returns**
1221
1222Provider name.
1223
1224
1225### OH_AI_DeviceInfoGetProviderDevice()
1226
1227```
1228OH_AI_API const char* OH_AI_DeviceInfoGetProviderDevice (const OH_AI_DeviceInfoHandle device_info)
1229```
1230
1231**Description**
1232
1233Obtains the name of a provider device.
1234
1235**Since**: 9
1236
1237**Parameters**
1238
1239| Name| Description|
1240| -------- | -------- |
1241| device_info | [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) that points to a device information instance.|
1242
1243**Returns**
1244
1245Name of the provider device.
1246
1247
1248### OH_AI_DeviceInfoSetDeviceId()
1249
1250```
1251OH_AI_API void OH_AI_DeviceInfoSetDeviceId (OH_AI_DeviceInfoHandle device_info, size_t device_id )
1252```
1253
1254**Description**
1255
1256Sets the ID of an NNRt device. This function is available only for NNRt devices.
1257
1258**Since**: 10
1259
1260**Parameters**
1261
1262| Name| Description|
1263| -------- | -------- |
1264| device_info | [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) that points to a device information instance.|
1265| device_id | NNRt device ID.|
1266
1267
1268### OH_AI_DeviceInfoSetEnableFP16()
1269
1270```
1271OH_AI_API void OH_AI_DeviceInfoSetEnableFP16 (OH_AI_DeviceInfoHandle device_info, bool is_fp16 )
1272```
1273
1274**Description**
1275
1276Sets whether to enable float16 inference. This function is available only for CPU and GPU devices.
1277
1278**Since**: 9
1279
1280**Parameters**
1281
1282| Name| Description|
1283| -------- | -------- |
1284| device_info | [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) that points to a device information instance.|
1285| is_fp16 | Whether to enable the float16 inference mode.|
1286
1287
1288### OH_AI_DeviceInfoSetFrequency()
1289
1290```
1291OH_AI_API void OH_AI_DeviceInfoSetFrequency (OH_AI_DeviceInfoHandle device_info, int frequency )
1292```
1293
1294**Description**
1295
1296Sets the NPU frequency type. This function is available only for NPU devices.
1297
1298**Since**: 9
1299
1300**Parameters**
1301
1302| Name| Description|
1303| -------- | -------- |
1304| device_info | [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) that points to a device information instance.|
1305| frequency | NPU frequency type. The value ranges from **0** to **4**. The default value is **3**. **1**: low power consumption; **2**: balanced; **3**: high performance; **4**: ultra-high performance|
1306
1307
1308### OH_AI_DeviceInfoSetPerformanceMode()
1309
1310```
1311OH_AI_API void OH_AI_DeviceInfoSetPerformanceMode (OH_AI_DeviceInfoHandle device_info, OH_AI_PerformanceMode mode )
1312```
1313
1314**Description**
1315
1316Sets the performance mode of an NNRt device. This function is available only for NNRt devices.
1317
1318**Since**: 10
1319
1320**Parameters**
1321
1322| Name| Description|
1323| -------- | -------- |
1324| device_info | [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) that points to a device information instance.|
1325| mode | NNRt performance mode, which is specified by [OH_AI_PerformanceMode](#oh_ai_performancemode).|
1326
1327
1328### OH_AI_DeviceInfoSetPriority()
1329
1330```
1331OH_AI_API void OH_AI_DeviceInfoSetPriority (OH_AI_DeviceInfoHandle device_info, OH_AI_Priority priority )
1332```
1333
1334**Description**
1335
1336Sets the priority of an NNRt task. This function is available only for NNRt devices.
1337
1338**Since**: 10
1339
1340**Parameters**
1341
1342| Name| Description|
1343| -------- | -------- |
1344| device_info | [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) that points to a device information instance.|
1345| priority | NNRt task priority, which is specified by [OH_AI_Priority](#oh_ai_priority).|
1346
1347
1348### OH_AI_DeviceInfoSetProvider()
1349
1350```
1351OH_AI_API void OH_AI_DeviceInfoSetProvider (OH_AI_DeviceInfoHandle device_info, const char * provider )
1352```
1353
1354**Description**
1355
1356Sets the name of the provider.
1357
1358**Since**: 9
1359
1360**Parameters**
1361
1362| Name| Description|
1363| -------- | -------- |
1364| device_info | [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) that points to a device information instance.|
1365| provider | Provider name.|
1366
1367
1368### OH_AI_DeviceInfoSetProviderDevice()
1369
1370```
1371OH_AI_API void OH_AI_DeviceInfoSetProviderDevice (OH_AI_DeviceInfoHandle device_info, const char * device )
1372```
1373
1374**Description**
1375
1376Sets the name of a provider device.
1377
1378**Since**: 9
1379
1380**Parameters**
1381
1382| Name| Description|
1383| -------- | -------- |
1384| device_info | [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) that points to a device information instance.|
1385| device | Name of the provider device, for example, CPU.|
1386
1387
1388### OH_AI_ExportModel()
1389
1390```
1391OH_AI_API OH_AI_Status OH_AI_ExportModel (OH_AI_ModelHandle model, OH_AI_ModelType model_type, const char * model_file, OH_AI_QuantizationType quantization_type, bool export_inference_only, char ** output_tensor_name, size_t num )
1392```
1393
1394**Description**
1395
1396Exports a training model. This API is used only for on-device training.
1397
1398**Since**: 11
1399
1400**Parameters**
1401
1402| Name| Description|
1403| -------- | -------- |
1404| model | Pointer to the model object.|
1405| model_type | Model file type, which is specified by [OH_AI_ModelType](#oh_ai_modeltype).|
1406| model_file | Path of the exported model file.|
1407| quantization_type | Quantization type.|
1408| export_inference_only | Whether to export an inference model.|
1409| output_tensor_name | Output tensor of the exported model. This parameter is left blank by default, which indicates full export.|
1410| num | Number of output tensors.|
1411
1412**Returns**
1413
1414Status code enumerated by [OH_AI_Status](#oh_ai_status). The value **OH_AI_Status::OH_AI_STATUS_SUCCESS** indicates that the operation is successful.
1415
1416### OH_AI_ExportModelBuffer()
1417
```
OH_AI_API OH_AI_Status OH_AI_ExportModelBuffer (OH_AI_ModelHandle model, OH_AI_ModelType model_type, char ** model_data, size_t * data_size, OH_AI_QuantizationType quantization_type, bool export_inference_only, char ** output_tensor_name, size_t num )
```

**Description**

Exports a trained model to a memory buffer. This API is used only for on-device training.
1423
1424**Since**: 11
1425
1426**Parameters**
1427
1428| Name| Description|
1429| -------- | -------- |
1430| model | Pointer to the model object. |
1431| model_type | Model file type, which is specified by [OH_AI_ModelType](#oh_ai_modeltype). |
1432| model_data | Pointer to the buffer that stores the exported model file. |
1433| data_size | Buffer size. |
1434| quantization_type | Quantization type. |
1435| export_inference_only | Whether to export an inference model. |
1436| output_tensor_name | Output tensor of the exported model. This parameter is left blank by default, which indicates full export. |
1437| num | Number of output tensors. |
1438
1439**Returns**
1440
1441Status code enumerated by [OH_AI_Status](#oh_ai_status). The value **OH_AI_Status::OH_AI_STATUS_SUCCESS** indicates that the operation is successful.


### OH_AI_ExportWeightsCollaborateWithMicro()

```
OH_AI_API OH_AI_Status OH_AI_ExportWeightsCollaborateWithMicro (OH_AI_ModelHandle model, OH_AI_ModelType model_type, const char * weight_file, bool is_inference, bool enable_fp16, char ** changeable_weights_name, size_t num )
```

**Description**

Exports the weight file of a training model for micro inference. This API is used only for on-device training.

**Since**: 11

**Parameters**

| Name| Description|
| -------- | -------- |
| model | Pointer to the model object.|
| model_type | Model file type, which is specified by [OH_AI_ModelType](#oh_ai_modeltype).|
| weight_file | Path of the exported weight file.|
| is_inference | Whether to export an inference model. Currently, this parameter can only be set to **true**.|
| enable_fp16 | Whether to save floating-point weights in float16 format.|
| changeable_weights_name | Names of the weight tensors with variable shapes.|
| num | Number of weight tensors with variable shapes.|

**Returns**

Status code enumerated by [OH_AI_Status](#oh_ai_status). The value **OH_AI_Status::OH_AI_STATUS_SUCCESS** indicates that the operation is successful.


### OH_AI_GetAllNNRTDeviceDescs()

```
OH_AI_API NNRTDeviceDesc* OH_AI_GetAllNNRTDeviceDescs (size_t * num)
```

**Description**

Obtains the descriptions of all NNRt devices in the system.

**Since**: 10

**Parameters**

| Name| Description|
| -------- | -------- |
| num | Number of NNRt devices.|

**Returns**

Pointer to the array of NNRt device descriptions. If the operation fails, **NULL** is returned.

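A minimal sketch that enumerates NNRt devices with this API and the related getters. It assumes the array is released with **OH_AI_DestroyAllNNRTDeviceDescs** from **context.h**; confirm the release function against your SDK version.

```c
#include <stdio.h>
#include <mindspore/context.h>

/* Print the ID, name, and type of every NNRt device in the system. */
void ListNNRTDevices(void) {
    size_t num = 0;
    NNRTDeviceDesc *descs = OH_AI_GetAllNNRTDeviceDescs(&num);
    if (descs == NULL) {
        return;  /* no NNRt devices, or the query failed */
    }
    for (size_t i = 0; i < num; ++i) {
        NNRTDeviceDesc *desc = OH_AI_GetElementOfNNRTDeviceDescs(descs, i);
        printf("id=%zu name=%s type=%d\n",
               OH_AI_GetDeviceIdFromNNRTDeviceDesc(desc),
               OH_AI_GetNameFromNNRTDeviceDesc(desc),
               (int)OH_AI_GetTypeFromNNRTDeviceDesc(desc));
    }
    OH_AI_DestroyAllNNRTDeviceDescs(&descs);  /* assumed release API */
}
```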

### OH_AI_GetDeviceIdFromNNRTDeviceDesc()

```
OH_AI_API size_t OH_AI_GetDeviceIdFromNNRTDeviceDesc (const NNRTDeviceDesc * desc)
```

**Description**

Obtains the NNRt device ID from the specified NNRt device description. Note that this ID is valid only for NNRt devices.

**Since**: 10

**Parameters**

| Name| Description|
| -------- | -------- |
| desc | Pointer to the NNRt device description.|

**Returns**

NNRt device ID.


### OH_AI_GetElementOfNNRTDeviceDescs()

```
OH_AI_API NNRTDeviceDesc* OH_AI_GetElementOfNNRTDeviceDescs (NNRTDeviceDesc * descs, size_t index )
```

**Description**

Obtains the pointer to an element in the NNRt device description array.

**Since**: 10

**Parameters**

| Name| Description|
| -------- | -------- |
| descs | NNRt device description array.|
| index | Index of an array element.|

**Returns**

Pointer to an element in the NNRt device description array.


### OH_AI_GetNameFromNNRTDeviceDesc()

```
OH_AI_API const char* OH_AI_GetNameFromNNRTDeviceDesc (const NNRTDeviceDesc * desc)
```

**Description**

Obtains the NNRt device name from the specified NNRt device description.

**Since**: 10

**Parameters**

| Name| Description|
| -------- | -------- |
| desc | Pointer to the NNRt device description.|

**Returns**

NNRt device name. The value is a pointer to a constant string, which is held by **desc**. The caller does not need to destroy it separately.


### OH_AI_GetTypeFromNNRTDeviceDesc()

```
OH_AI_API OH_AI_NNRTDeviceType OH_AI_GetTypeFromNNRTDeviceDesc (const NNRTDeviceDesc * desc)
```

**Description**

Obtains the NNRt device type from the specified NNRt device description.

**Since**: 10

**Parameters**

| Name| Description|
| -------- | -------- |
| desc | Pointer to the NNRt device description.|

**Returns**

NNRt device type, which is specified by [OH_AI_NNRTDeviceType](#oh_ai_nnrtdevicetype).


### OH_AI_ModelBuild()

```
OH_AI_API OH_AI_Status OH_AI_ModelBuild (OH_AI_ModelHandle model, const void * model_data, size_t data_size, OH_AI_ModelType model_type, const OH_AI_ContextHandle model_context )
```

**Description**

Loads and builds a MindSpore model from the memory buffer.

Note that the same [OH_AI_ContextHandle](#oh_ai_contexthandle) object can only be passed to [OH_AI_ModelBuild](#oh_ai_modelbuild) or [OH_AI_ModelBuildFromFile](#oh_ai_modelbuildfromfile) once. If you call this function multiple times, make sure that you create a separate [OH_AI_ContextHandle](#oh_ai_contexthandle) object for each call.

**Since**: 9

**Parameters**

| Name| Description|
| -------- | -------- |
| model | Pointer to the model object.|
| model_data | Address of the loaded model data in the memory.|
| data_size | Length of the model data.|
| model_type | Model file type, which is specified by [OH_AI_ModelType](#oh_ai_modeltype).|
| model_context | Model runtime context, which is specified by [OH_AI_ContextHandle](#oh_ai_contexthandle).|

**Returns**

Status code enumerated by [OH_AI_Status](#oh_ai_status). The value **OH_AI_Status::OH_AI_STATUS_SUCCESS** indicates that the operation is successful.


### OH_AI_ModelBuildFromFile()

```
OH_AI_API OH_AI_Status OH_AI_ModelBuildFromFile (OH_AI_ModelHandle model, const char * model_path, OH_AI_ModelType model_type, const OH_AI_ContextHandle model_context )
```

**Description**

Loads and builds a MindSpore model from a model file.

Note that the same [OH_AI_ContextHandle](#oh_ai_contexthandle) object can only be passed to [OH_AI_ModelBuild](#oh_ai_modelbuild) or [OH_AI_ModelBuildFromFile](#oh_ai_modelbuildfromfile) once. If you call this function multiple times, make sure that you create a separate [OH_AI_ContextHandle](#oh_ai_contexthandle) object for each call.

**Since**: 9

**Parameters**

| Name| Description|
| -------- | -------- |
| model | Pointer to the model object.|
| model_path | Path of the model file.|
| model_type | Model file type, which is specified by [OH_AI_ModelType](#oh_ai_modeltype).|
| model_context | Model runtime context, which is specified by [OH_AI_ContextHandle](#oh_ai_contexthandle).|

**Returns**

Status code enumerated by [OH_AI_Status](#oh_ai_status). The value **OH_AI_Status::OH_AI_STATUS_SUCCESS** indicates that the operation is successful.

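A minimal build sketch. The context helpers (**OH_AI_ContextCreate**, **OH_AI_DeviceInfoCreate**, **OH_AI_ContextAddDeviceInfo**, **OH_AI_DEVICETYPE_CPU**) come from **context.h** and **types.h**; treat their exact signatures as assumptions to verify there.

```c
#include <stddef.h>
#include <mindspore/context.h>
#include <mindspore/model.h>

/* Build a MindIR model from a file on the CPU backend. */
OH_AI_ModelHandle BuildModel(const char *model_path) {
    OH_AI_ContextHandle context = OH_AI_ContextCreate();
    OH_AI_DeviceInfoHandle cpu = OH_AI_DeviceInfoCreate(OH_AI_DEVICETYPE_CPU);
    OH_AI_ContextAddDeviceInfo(context, cpu);

    OH_AI_ModelHandle model = OH_AI_ModelCreate();
    /* The same context must not be passed to a build call twice. */
    if (OH_AI_ModelBuildFromFile(model, model_path, OH_AI_MODELTYPE_MINDIR,
                                 context) != OH_AI_STATUS_SUCCESS) {
        OH_AI_ModelDestroy(&model);
        return NULL;
    }
    return model;
}
```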

### OH_AI_ModelCreate()

```
OH_AI_API OH_AI_ModelHandle OH_AI_ModelCreate ()
```

**Description**

Creates a model object.

**Since**: 9

**Returns**

Pointer to the model object.


### OH_AI_ModelDestroy()

```
OH_AI_API void OH_AI_ModelDestroy (OH_AI_ModelHandle * model)
```

**Description**

Destroys a model object.

**Since**: 9

**Parameters**

| Name| Description|
| -------- | -------- |
| model | Pointer to the model object.|


### OH_AI_ModelGetInputByTensorName()

```
OH_AI_API OH_AI_TensorHandle OH_AI_ModelGetInputByTensorName (const OH_AI_ModelHandle model, const char * tensor_name )
```

**Description**

Obtains the input tensor of a model by tensor name.

**Since**: 9

**Parameters**

| Name| Description|
| -------- | -------- |
| model | Pointer to the model object.|
| tensor_name | Tensor name.|

**Returns**

Pointer to the input tensor indicated by **tensor_name**. If the tensor does not exist in the model input, **NULL** is returned.


### OH_AI_ModelGetInputs()

```
OH_AI_API OH_AI_TensorHandleArray OH_AI_ModelGetInputs (const OH_AI_ModelHandle model)
```

**Description**

Obtains the input tensor array structure of a model.

**Since**: 9

**Parameters**

| Name| Description|
| -------- | -------- |
| model | Pointer to the model object.|

**Returns**

Tensor array structure corresponding to the model input.


### OH_AI_ModelGetLearningRate()

```
OH_AI_API float OH_AI_ModelGetLearningRate (OH_AI_ModelHandle model)
```

**Description**

Obtains the learning rate for model training. This API is used only for on-device training.

**Since**: 11

**Parameters**

| Name| Description|
| -------- | -------- |
| model | Pointer to the model object.|

**Returns**

Learning rate. If no optimizer is set, the value is **0.0**.


### OH_AI_ModelGetOutputByTensorName()

```
OH_AI_API OH_AI_TensorHandle OH_AI_ModelGetOutputByTensorName (const OH_AI_ModelHandle model, const char * tensor_name )
```

**Description**

Obtains the output tensor of a model by tensor name.

**Since**: 9

**Parameters**

| Name| Description|
| -------- | -------- |
| model | Pointer to the model object.|
| tensor_name | Tensor name.|

**Returns**

Pointer to the output tensor indicated by **tensor_name**. If the tensor does not exist in the model output, **NULL** is returned.


### OH_AI_ModelGetOutputs()

```
OH_AI_API OH_AI_TensorHandleArray OH_AI_ModelGetOutputs (const OH_AI_ModelHandle model)
```

**Description**

Obtains the output tensor array structure of a model.

**Since**: 9

**Parameters**

| Name| Description|
| -------- | -------- |
| model | Pointer to the model object.|

**Returns**

Tensor array structure corresponding to the model output.


### OH_AI_ModelGetTrainMode()

```
OH_AI_API bool OH_AI_ModelGetTrainMode (OH_AI_ModelHandle model)
```

**Description**

Obtains the training mode.

**Since**: 11

**Parameters**

| Name| Description|
| -------- | -------- |
| model | Pointer to the model object.|

**Returns**

Whether the training mode is used.


### OH_AI_ModelGetWeights()

```
OH_AI_API OH_AI_TensorHandleArray OH_AI_ModelGetWeights (OH_AI_ModelHandle model)
```

**Description**

Obtains all weight tensors of a model. This API is used only for on-device training.

**Since**: 11

**Parameters**

| Name| Description|
| -------- | -------- |
| model | Pointer to the model object.|

**Returns**

All weight tensors of the model.


### OH_AI_ModelPredict()

```
OH_AI_API OH_AI_Status OH_AI_ModelPredict (OH_AI_ModelHandle model, const OH_AI_TensorHandleArray inputs, OH_AI_TensorHandleArray * outputs, const OH_AI_KernelCallBack before, const OH_AI_KernelCallBack after )
```

**Description**

Performs model inference.

**Since**: 9

**Parameters**

| Name| Description|
| -------- | -------- |
| model | Pointer to the model object.|
| inputs | Tensor array structure corresponding to the model input.|
| outputs | Pointer to the tensor array structure corresponding to the model output.|
| before | Callback function executed before model inference.|
| after | Callback function executed after model inference.|

**Returns**

Status code enumerated by [OH_AI_Status](#oh_ai_status). The value **OH_AI_Status::OH_AI_STATUS_SUCCESS** indicates that the operation is successful.

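A minimal inference sketch combining the input accessors with **OH_AI_ModelPredict**. The **handle_num**/**handle_list** fields come from the **OH_AI_TensorHandleArray** struct; the copy logic assumes the caller's buffer matches the input layout.

```c
#include <string.h>
#include <mindspore/model.h>
#include <mindspore/tensor.h>

/* Copy user data into every model input, then run inference. */
OH_AI_Status RunInference(OH_AI_ModelHandle model,
                          const void *data, size_t data_len) {
    OH_AI_TensorHandleArray inputs = OH_AI_ModelGetInputs(model);
    for (size_t i = 0; i < inputs.handle_num; ++i) {
        OH_AI_TensorHandle in = inputs.handle_list[i];
        void *dst = OH_AI_TensorGetMutableData(in);  /* allocates if empty */
        size_t size = OH_AI_TensorGetDataSize(in);
        memcpy(dst, data, size < data_len ? size : data_len);
    }
    OH_AI_TensorHandleArray outputs;
    /* No per-operator callbacks in this sketch. */
    return OH_AI_ModelPredict(model, inputs, &outputs, NULL, NULL);
}
```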

### OH_AI_ModelResize()

```
OH_AI_API OH_AI_Status OH_AI_ModelResize (OH_AI_ModelHandle model, const OH_AI_TensorHandleArray inputs, OH_AI_ShapeInfo * shape_infos, size_t shape_info_num )
```

**Description**

Adjusts the input tensor shapes of a built model.

**Since**: 9

**Parameters**

| Name| Description|
| -------- | -------- |
| model | Pointer to the model object.|
| inputs | Tensor array structure corresponding to the model input.|
| shape_infos | Input shape information array, which consists of tensor shapes arranged in the model input sequence. The model adjusts the tensor shapes in sequence.|
| shape_info_num | Length of the shape information array.|

**Returns**

Status code enumerated by [OH_AI_Status](#oh_ai_status). The value **OH_AI_Status::OH_AI_STATUS_SUCCESS** indicates that the operation is successful.

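A sketch of resizing the first input to batch size 2, assuming a single NHWC input originally shaped `[1, 224, 224, 3]` (the shape values are illustrative). **OH_AI_ShapeInfo** holds `shape_num` and a fixed-size `shape` array.

```c
#include <mindspore/model.h>

/* Resize the model's (single) input to [2, 224, 224, 3]. */
OH_AI_Status ResizeToBatch2(OH_AI_ModelHandle model) {
    OH_AI_TensorHandleArray inputs = OH_AI_ModelGetInputs(model);
    OH_AI_ShapeInfo shape_info;
    shape_info.shape_num = 4;
    shape_info.shape[0] = 2;    /* new batch size */
    shape_info.shape[1] = 224;
    shape_info.shape[2] = 224;
    shape_info.shape[3] = 3;
    return OH_AI_ModelResize(model, inputs, &shape_info, 1);
}
```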

### OH_AI_ModelSetLearningRate()

```
OH_AI_API OH_AI_Status OH_AI_ModelSetLearningRate (OH_AI_ModelHandle model, float learning_rate )
```

**Description**

Sets the learning rate for model training. This API is used only for on-device training.

**Since**: 11

**Parameters**

| Name| Description|
| -------- | -------- |
| model | Pointer to the model object.|
| learning_rate | Learning rate.|

**Returns**

Status code enumerated by [OH_AI_Status](#oh_ai_status). The value **OH_AI_Status::OH_AI_STATUS_SUCCESS** indicates that the operation is successful.


### OH_AI_ModelSetTrainMode()

```
OH_AI_API OH_AI_Status OH_AI_ModelSetTrainMode (OH_AI_ModelHandle model, bool train )
```

**Description**

Sets the training mode. This API is used only for on-device training.

**Since**: 11

**Parameters**

| Name| Description|
| -------- | -------- |
| model | Pointer to the model object.|
| train | Whether the training mode is used.|

**Returns**

Status code enumerated by [OH_AI_Status](#oh_ai_status). The value **OH_AI_Status::OH_AI_STATUS_SUCCESS** indicates that the operation is successful.


### OH_AI_ModelSetupVirtualBatch()

```
OH_AI_API OH_AI_Status OH_AI_ModelSetupVirtualBatch (OH_AI_ModelHandle model, int virtual_batch_multiplier, float lr, float momentum )
```

**Description**

Sets the virtual batch for training. This API is used only for on-device training.

**Since**: 11

**Parameters**

| Name| Description|
| -------- | -------- |
| model | Pointer to the model object.|
| virtual_batch_multiplier | Virtual batch multiplier. If the value is less than **1**, the virtual batch is disabled.|
| lr | Learning rate. The default value is **-1.0f**.|
| momentum | Momentum. The default value is **-1.0f**.|

**Returns**

Status code enumerated by [OH_AI_Status](#oh_ai_status). The value **OH_AI_Status::OH_AI_STATUS_SUCCESS** indicates that the operation is successful.


### OH_AI_ModelUpdateWeights()

```
OH_AI_API OH_AI_Status OH_AI_ModelUpdateWeights (OH_AI_ModelHandle model, const OH_AI_TensorHandleArray new_weights )
```

**Description**

Updates the weight tensors of a model. This API is used only for on-device training.

**Since**: 11

**Parameters**

| Name| Description|
| -------- | -------- |
| model | Pointer to the model object.|
| new_weights | Weight tensors to be updated.|

**Returns**

Status code enumerated by [OH_AI_Status](#oh_ai_status). The value **OH_AI_Status::OH_AI_STATUS_SUCCESS** indicates that the operation is successful.


### OH_AI_RunStep()

```
OH_AI_API OH_AI_Status OH_AI_RunStep (OH_AI_ModelHandle model, const OH_AI_KernelCallBack before, const OH_AI_KernelCallBack after )
```

**Description**

Runs a single training step of the model. This API is used only for on-device training.

**Since**: 11

**Parameters**

| Name| Description|
| -------- | -------- |
| model | Pointer to the model object.|
| before | Callback function executed before the training step.|
| after | Callback function executed after the training step.|

**Returns**

Status code enumerated by [OH_AI_Status](#oh_ai_status). The value **OH_AI_Status::OH_AI_STATUS_SUCCESS** indicates that the operation is successful.


### OH_AI_TensorClone()

```
OH_AI_API OH_AI_TensorHandle OH_AI_TensorClone (OH_AI_TensorHandle tensor)
```

**Description**

Clones a tensor.

**Since**: 9

**Parameters**

| Name| Description|
| -------- | -------- |
| tensor | Pointer to the tensor to clone.|

**Returns**

Handle of a tensor object.


### OH_AI_TensorCreate()

```
OH_AI_API OH_AI_TensorHandle OH_AI_TensorCreate (const char * name, OH_AI_DataType type, const int64_t * shape, size_t shape_num, const void * data, size_t data_len )
```

**Description**

Creates a tensor object.

**Since**: 9

**Parameters**

| Name| Description|
| -------- | -------- |
| name | Tensor name.|
| type | Tensor data type.|
| shape | Tensor dimension array.|
| shape_num | Length of the tensor dimension array.|
| data | Data pointer.|
| data_len | Data length.|

**Returns**

Handle of a tensor object.

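A minimal create/destroy sketch for a float32 tensor. The data type enum **OH_AI_DATATYPE_NUMBERTYPE_FLOAT32** is assumed from **data_type.h**.

```c
#include <stdint.h>
#include <mindspore/tensor.h>

/* Create a named 1x3 float32 tensor backed by a local buffer, then free it. */
void TensorLifecycle(void) {
    const int64_t shape[] = {1, 3};
    float data[] = {1.0f, 2.0f, 3.0f};
    OH_AI_TensorHandle t = OH_AI_TensorCreate(
        "input", OH_AI_DATATYPE_NUMBERTYPE_FLOAT32,
        shape, 2, data, sizeof(data));
    if (t != NULL) {
        OH_AI_TensorDestroy(&t);  /* takes a level-2 pointer */
    }
}
```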

### OH_AI_TensorDestroy()

```
OH_AI_API void OH_AI_TensorDestroy (OH_AI_TensorHandle * tensor)
```

**Description**

Destroys a tensor object.

**Since**: 9

**Parameters**

| Name| Description|
| -------- | -------- |
| tensor | Level-2 pointer to the tensor handle.|


### OH_AI_TensorGetData()

```
OH_AI_API const void* OH_AI_TensorGetData (const OH_AI_TensorHandle tensor)
```

**Description**

Obtains the pointer to tensor data.

**Since**: 9

**Parameters**

| Name| Description|
| -------- | -------- |
| tensor | Handle of the tensor object.|

**Returns**

Pointer to tensor data.


### OH_AI_TensorGetDataSize()

```
OH_AI_API size_t OH_AI_TensorGetDataSize (const OH_AI_TensorHandle tensor)
```

**Description**

Obtains the number of bytes of the tensor data.

**Since**: 9

**Parameters**

| Name| Description|
| -------- | -------- |
| tensor | Handle of the tensor object.|

**Returns**

Number of bytes of the tensor data.


### OH_AI_TensorGetDataType()

```
OH_AI_API OH_AI_DataType OH_AI_TensorGetDataType (const OH_AI_TensorHandle tensor)
```

**Description**

Obtains the tensor data type.

**Since**: 9

**Parameters**

| Name| Description|
| -------- | -------- |
| tensor | Handle of the tensor object.|

**Returns**

Tensor data type.


### OH_AI_TensorGetElementNum()

```
OH_AI_API int64_t OH_AI_TensorGetElementNum (const OH_AI_TensorHandle tensor)
```

**Description**

Obtains the number of tensor elements.

**Since**: 9

**Parameters**

| Name| Description|
| -------- | -------- |
| tensor | Handle of the tensor object.|

**Returns**

Number of tensor elements.


### OH_AI_TensorGetFormat()

```
OH_AI_API OH_AI_Format OH_AI_TensorGetFormat (const OH_AI_TensorHandle tensor)
```

**Description**

Obtains the tensor data format.

**Since**: 9

**Parameters**

| Name| Description|
| -------- | -------- |
| tensor | Handle of the tensor object.|

**Returns**

Tensor data format.


### OH_AI_TensorGetMutableData()

```
OH_AI_API void* OH_AI_TensorGetMutableData (const OH_AI_TensorHandle tensor)
```

**Description**

Obtains the pointer to mutable tensor data. If the data has not been allocated, memory is allocated first.

**Since**: 9

**Parameters**

| Name| Description|
| -------- | -------- |
| tensor | Handle of the tensor object.|

**Returns**

Pointer to tensor data.


### OH_AI_TensorGetName()

```
OH_AI_API const char* OH_AI_TensorGetName (const OH_AI_TensorHandle tensor)
```

**Description**

Obtains the tensor name.

**Since**: 9

**Parameters**

| Name| Description|
| -------- | -------- |
| tensor | Handle of the tensor object.|

**Returns**

Tensor name.


### OH_AI_TensorGetShape()

```
OH_AI_API const int64_t* OH_AI_TensorGetShape (const OH_AI_TensorHandle tensor, size_t * shape_num )
```

**Description**

Obtains the tensor shape.

**Since**: 9

**Parameters**

| Name| Description|
| -------- | -------- |
| tensor | Handle of the tensor object.|
| shape_num | Length of the tensor shape array.|

**Returns**

Shape array.

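The shape, name, and size getters combine into a small inspection helper; a sketch:

```c
#include <stdio.h>
#include <stdint.h>
#include <mindspore/tensor.h>

/* Print a tensor's name, shape, element count, and byte size. */
void PrintTensorInfo(const OH_AI_TensorHandle tensor) {
    size_t shape_num = 0;
    const int64_t *shape = OH_AI_TensorGetShape(tensor, &shape_num);
    printf("%s: [", OH_AI_TensorGetName(tensor));
    for (size_t i = 0; i < shape_num; ++i) {
        printf(i == 0 ? "%lld" : ", %lld", (long long)shape[i]);
    }
    printf("], %lld elements, %zu bytes\n",
           (long long)OH_AI_TensorGetElementNum(tensor),
           OH_AI_TensorGetDataSize(tensor));
}
```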

### OH_AI_TensorSetData()

```
OH_AI_API void OH_AI_TensorSetData (OH_AI_TensorHandle tensor, void * data )
```

**Description**

Sets the tensor data.

**Since**: 9

**Parameters**

| Name| Description|
| -------- | -------- |
| tensor | Handle of the tensor object.|
| data | Data pointer.|


### OH_AI_TensorSetDataType()

```
OH_AI_API void OH_AI_TensorSetDataType (OH_AI_TensorHandle tensor, OH_AI_DataType type )
```

**Description**

Sets the data type of a tensor.

**Since**: 9

**Parameters**

| Name| Description|
| -------- | -------- |
| tensor | Handle of the tensor object.|
| type | Data type, which is specified by [OH_AI_DataType](#oh_ai_datatype).|


### OH_AI_TensorSetFormat()

```
OH_AI_API void OH_AI_TensorSetFormat (OH_AI_TensorHandle tensor, OH_AI_Format format )
```

**Description**

Sets the tensor data format.

**Since**: 9

**Parameters**

| Name| Description|
| -------- | -------- |
| tensor | Handle of the tensor object.|
| format | Tensor data format.|


### OH_AI_TensorSetName()

```
OH_AI_API void OH_AI_TensorSetName (OH_AI_TensorHandle tensor, const char * name )
```

**Description**

Sets the tensor name.

**Since**: 9

**Parameters**

| Name| Description|
| -------- | -------- |
| tensor | Handle of the tensor object.|
| name | Tensor name.|


### OH_AI_TensorSetShape()

```
OH_AI_API void OH_AI_TensorSetShape (OH_AI_TensorHandle tensor, const int64_t * shape, size_t shape_num )
```

**Description**

Sets the tensor shape.

**Since**: 9

**Parameters**

| Name| Description|
| -------- | -------- |
| tensor | Handle of the tensor object.|
| shape | Shape array.|
| shape_num | Length of the tensor shape array.|


### OH_AI_TensorSetUserData()

```
OH_AI_API OH_AI_Status OH_AI_TensorSetUserData (OH_AI_TensorHandle tensor, void * data, size_t data_size )
```

**Description**

Sets the tensor as user data. This function allows you to reuse user data as the model input, which eliminates one data copy.

> **NOTE**<br>The user data is external data for the tensor and is not released automatically when the tensor is destroyed. The caller releases it separately and must ensure that it remains valid for as long as the tensor is in use.

**Since**: 10

**Parameters**

| Name| Description|
| -------- | -------- |
| tensor | Handle of the tensor object.|
| data | Start address of the user data.|
| data_size | Length of the user data.|

**Returns**

Execution status code. The value **OH_AI_STATUS_SUCCESS** indicates that the operation is successful. If the operation fails, an error code is returned.

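A sketch of binding a caller-owned buffer to an input tensor. The buffer here is the caller's responsibility: it must stay valid while the tensor uses it and be freed by the caller afterward.

```c
#include <mindspore/tensor.h>

/* Reuse a caller-owned buffer as tensor data to avoid one copy.
 * The tensor does NOT take ownership: keep `buffer` alive while the
 * tensor is in use, and free it yourself afterward. */
OH_AI_Status BindUserBuffer(OH_AI_TensorHandle input,
                            void *buffer, size_t buffer_bytes) {
    return OH_AI_TensorSetUserData(input, buffer, buffer_bytes);
}
```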

### OH_AI_TrainCfgCreate()

```
OH_AI_API OH_AI_TrainCfgHandle OH_AI_TrainCfgCreate ()
```

**Description**

Creates the pointer to the training configuration object. This API is used only for on-device training.

**Since**: 11

**Returns**

Pointer to the training configuration object.


### OH_AI_TrainCfgDestroy()

```
OH_AI_API void OH_AI_TrainCfgDestroy (OH_AI_TrainCfgHandle * train_cfg)
```

**Description**

Destroys the pointer to the training configuration object. This API is used only for on-device training.

**Since**: 11

**Parameters**

| Name| Description|
| -------- | -------- |
| train_cfg | Pointer to the training configuration object.|


### OH_AI_TrainCfgGetLossName()

```
OH_AI_API char** OH_AI_TrainCfgGetLossName (OH_AI_TrainCfgHandle train_cfg, size_t * num )
```

**Description**

Obtains the list of loss function names. This API is used only for on-device training.

**Since**: 11

**Parameters**

| Name| Description|
| -------- | -------- |
| train_cfg | Pointer to the training configuration object.|
| num | Number of loss functions.|

**Returns**

List of loss function names.
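
A sketch pairing this getter with **OH_AI_TrainCfgSetLossName**. The name `"loss_fct"` is a placeholder for a loss node in your own graph, and ownership of the returned list is an assumption to verify against your SDK version.

```c
#include <stdio.h>
#include <mindspore/model.h>

/* Set custom loss-node names on a training config, then read them back. */
void ConfigureLoss(OH_AI_TrainCfgHandle train_cfg) {
    const char *names[] = {"loss_fct"};  /* placeholder node name */
    OH_AI_TrainCfgSetLossName(train_cfg, names, 1);

    size_t num = 0;
    char **got = OH_AI_TrainCfgGetLossName(train_cfg, &num);
    for (size_t i = 0; i < num; ++i) {
        printf("loss[%zu] = %s\n", i, got[i]);
    }
    /* Ownership of `got` is SDK-defined; check whether the caller
     * must free it in your SDK version. */
}
```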


### OH_AI_TrainCfgGetOptimizationLevel()

```
OH_AI_API OH_AI_OptimizationLevel OH_AI_TrainCfgGetOptimizationLevel (OH_AI_TrainCfgHandle train_cfg)
```

**Description**

Obtains the optimization level of the training configuration object. This API is used only for on-device training.

**Since**: 11

**Parameters**

| Name| Description|
| -------- | -------- |
| train_cfg | Pointer to the training configuration object.|

**Returns**

Optimization level.


### OH_AI_TrainCfgSetLossName()

```
OH_AI_API void OH_AI_TrainCfgSetLossName (OH_AI_TrainCfgHandle train_cfg, const char ** loss_name, size_t num )
```

**Description**

Sets the list of loss function names. This API is used only for on-device training.

**Since**: 11

**Parameters**

| Name| Description|
| -------- | -------- |
| train_cfg | Pointer to the training configuration object.|
| loss_name | List of loss function names.|
| num | Number of loss functions.|


### OH_AI_TrainCfgSetOptimizationLevel()

```
OH_AI_API void OH_AI_TrainCfgSetOptimizationLevel (OH_AI_TrainCfgHandle train_cfg, OH_AI_OptimizationLevel level )
```

**Description**

Sets the optimization level of the training configuration object. This API is used only for on-device training.

**Since**: 11

**Parameters**

| Name| Description|
| -------- | -------- |
| train_cfg | Pointer to the training configuration object.|
| level | Optimization level.|


### OH_AI_TrainModelBuild()

```
OH_AI_API OH_AI_Status OH_AI_TrainModelBuild (OH_AI_ModelHandle model, const void * model_data, size_t data_size, OH_AI_ModelType model_type, const OH_AI_ContextHandle model_context, const OH_AI_TrainCfgHandle train_cfg )
```

**Description**

Loads a training model from the memory buffer and compiles the model to a state ready for running on the device. This API is used only for on-device training.

**Since**: 11

**Parameters**

| Name| Description|
| -------- | -------- |
| model | Pointer to the model object.|
| model_data | Pointer to the buffer for storing the model file to be read.|
| data_size | Buffer size.|
| model_type | Model file type, which is specified by [OH_AI_ModelType](#oh_ai_modeltype).|
| model_context | Model runtime context, which is specified by [OH_AI_ContextHandle](#oh_ai_contexthandle).|
| train_cfg | Pointer to the training configuration object.|

**Returns**

Status code enumerated by [OH_AI_Status](#oh_ai_status). The value **OH_AI_Status::OH_AI_STATUS_SUCCESS** indicates that the operation is successful.


### OH_AI_TrainModelBuildFromFile()

```
OH_AI_API OH_AI_Status OH_AI_TrainModelBuildFromFile (OH_AI_ModelHandle model, const char * model_path, OH_AI_ModelType model_type, const OH_AI_ContextHandle model_context, const OH_AI_TrainCfgHandle train_cfg )
```

**Description**

Loads a training model from the specified path and compiles the model to a state ready for running on the device. This API is used only for on-device training.

**Since**: 11

**Parameters**

| Name| Description|
| -------- | -------- |
| model | Pointer to the model object.|
| model_path | Path of the model file.|
| model_type | Model file type, which is specified by [OH_AI_ModelType](#oh_ai_modeltype).|
| model_context | Model runtime context, which is specified by [OH_AI_ContextHandle](#oh_ai_contexthandle).|
| train_cfg | Pointer to the training configuration object.|

**Returns**

Status code enumerated by [OH_AI_Status](#oh_ai_status). The value **OH_AI_Status::OH_AI_STATUS_SUCCESS** indicates that the operation is successful.

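The training APIs above compose into a basic loop; a sketch under the assumption that `context` is already configured and that loading input batches is elided. Whether the model takes ownership of the training configuration after a successful build is SDK-defined; verify before destroying it separately.

```c
#include <mindspore/model.h>

/* Build a training model, run `steps` training iterations, then clean up. */
int TrainSteps(const char *model_path, OH_AI_ContextHandle context, int steps) {
    OH_AI_TrainCfgHandle cfg = OH_AI_TrainCfgCreate();
    OH_AI_ModelHandle model = OH_AI_ModelCreate();
    if (OH_AI_TrainModelBuildFromFile(model, model_path, OH_AI_MODELTYPE_MINDIR,
                                      context, cfg) != OH_AI_STATUS_SUCCESS) {
        OH_AI_TrainCfgDestroy(&cfg);
        OH_AI_ModelDestroy(&model);
        return -1;
    }
    OH_AI_ModelSetTrainMode(model, true);
    OH_AI_ModelSetLearningRate(model, 0.01f);
    for (int i = 0; i < steps; ++i) {
        /* Fill the model inputs with the next training batch here. */
        if (OH_AI_RunStep(model, NULL, NULL) != OH_AI_STATUS_SUCCESS) {
            break;
        }
    }
    OH_AI_ModelSetTrainMode(model, false);
    OH_AI_ModelDestroy(&model);
    return 0;
}
```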