# MindSpore


## Overview

Provides APIs related to MindSpore Lite model inference. The APIs in this module are non-thread-safe.

**System capability**: SystemCapability.Ai.MindSpore

**Since**: 9

## Summary


### Files

| Name| Description|
| -------- | -------- |
| [context.h](context_8h.md) | Provides **Context** APIs for configuring runtime information.<br>File to include: &lt;mindspore/context.h&gt;<br>Library: libmindspore_lite_ndk.so|
| [data_type.h](data__type_8h.md) | Declares tensor data types.<br>File to include: &lt;mindspore/data_type.h&gt;<br>Library: libmindspore_lite_ndk.so|
| [format.h](format_8h.md) | Declares tensor data formats.<br>File to include: &lt;mindspore/format.h&gt;<br>Library: libmindspore_lite_ndk.so|
| [model.h](model_8h.md) | Provides model-related APIs for model creation and inference.<br>File to include: &lt;mindspore/model.h&gt;<br>Library: libmindspore_lite_ndk.so|
| [status.h](status_8h.md) | Provides the status codes of MindSpore Lite.<br>File to include: &lt;mindspore/status.h&gt;<br>Library: libmindspore_lite_ndk.so|
| [tensor.h](tensor_8h.md) | Provides APIs for creating and modifying tensor information.<br>File to include: &lt;mindspore/tensor.h&gt;<br>Library: libmindspore_lite_ndk.so|
| [types.h](types_8h.md) | Provides the model file types and device types supported by MindSpore Lite.<br>File to include: &lt;mindspore/types.h&gt;<br>Library: libmindspore_lite_ndk.so|


### Structs

| Name| Description|
| -------- | -------- |
| [OH_AI_TensorHandleArray](_o_h___a_i___tensor_handle_array.md) | Defines the tensor array structure, which is used to store the tensor array pointer and tensor array length.|
| [OH_AI_ShapeInfo](_o_h___a_i___shape_info.md) | Defines dimension information. The maximum number of dimensions is set by **OH_AI_MAX_SHAPE_NUM**.|
| [OH_AI_CallBackParam](_o_h___a_i___call_back_param.md) | Defines the operator information passed in a callback.|


### Macro Definition

| Name| Description|
| -------- | -------- |
| [OH_AI_MAX_SHAPE_NUM](#oh_ai_max_shape_num) 32 | Maximum number of dimensions in shape information such as **OH_AI_ShapeInfo**.|


### Types

| Name| Description|
| -------- | -------- |
| [OH_AI_ContextHandle](#oh_ai_contexthandle) | Defines the pointer to the MindSpore context.|
| [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) | Defines the pointer to the MindSpore device information.|
| [OH_AI_DataType](#oh_ai_datatype) | Declares data types supported by MSTensor.|
| [OH_AI_Format](#oh_ai_format) | Declares data formats supported by MSTensor.|
| [OH_AI_ModelHandle](#oh_ai_modelhandle) | Defines the pointer to a model object.|
| [OH_AI_TrainCfgHandle](#oh_ai_traincfghandle) | Defines the pointer to a training configuration object.|
| [OH_AI_TensorHandleArray](#oh_ai_tensorhandlearray) | Defines the tensor array structure, which is used to store the tensor array pointer and tensor array length.|
| [OH_AI_ShapeInfo](_o_h___a_i___shape_info.md) | Defines dimension information. The maximum number of dimensions is set by **OH_AI_MAX_SHAPE_NUM**.|
| [OH_AI_CallBackParam](#oh_ai_callbackparam) | Defines the operator information passed in a callback.|
| [OH_AI_KernelCallBack](#oh_ai_kernelcallback) (const [OH_AI_TensorHandleArray](_o_h___a_i___tensor_handle_array.md) inputs, const [OH_AI_TensorHandleArray](_o_h___a_i___tensor_handle_array.md) outputs, const [OH_AI_CallBackParam](_o_h___a_i___call_back_param.md) kernel_Info) | Defines the pointer to a callback.|
| [OH_AI_Status](#oh_ai_status) | Defines MindSpore status codes.|
| [OH_AI_TensorHandle](#oh_ai_tensorhandle) | Defines the handle of a tensor object.|
| [OH_AI_ModelType](#oh_ai_modeltype) | Defines model file types.|
| [OH_AI_DeviceType](#oh_ai_devicetype) | Defines the supported device types.|
| [OH_AI_NNRTDeviceType](#oh_ai_nnrtdevicetype) | Defines NNRt device types.|
| [OH_AI_PerformanceMode](#oh_ai_performancemode) | Defines performance modes of the NNRt device.|
| [OH_AI_Priority](#oh_ai_priority) | Defines NNRt inference task priorities.|
| [OH_AI_OptimizationLevel](#oh_ai_optimizationlevel) | Defines training optimization levels.|
| [OH_AI_QuantizationType](#oh_ai_quantizationtype) | Defines quantization types.|
| [NNRTDeviceDesc](#nnrtdevicedesc) | Defines NNRt device information, including the device ID and device name.|
| [OH_AI_AllocatorHandle](#oh_ai_allocatorhandle) | Defines the handle of the memory allocator.|


### Enums

| Name| Description|
| -------- | -------- |
| [OH_AI_DataType](#oh_ai_datatype) {<br>OH_AI_DATATYPE_UNKNOWN = 0, OH_AI_DATATYPE_OBJECTTYPE_STRING = 12, OH_AI_DATATYPE_OBJECTTYPE_LIST = 13, OH_AI_DATATYPE_OBJECTTYPE_TUPLE = 14,<br>OH_AI_DATATYPE_OBJECTTYPE_TENSOR = 17, OH_AI_DATATYPE_NUMBERTYPE_BEGIN = 29, OH_AI_DATATYPE_NUMBERTYPE_BOOL = 30, OH_AI_DATATYPE_NUMBERTYPE_INT8 = 32,<br>OH_AI_DATATYPE_NUMBERTYPE_INT16 = 33, OH_AI_DATATYPE_NUMBERTYPE_INT32 = 34, OH_AI_DATATYPE_NUMBERTYPE_INT64 = 35, OH_AI_DATATYPE_NUMBERTYPE_UINT8 = 37,<br>OH_AI_DATATYPE_NUMBERTYPE_UINT16 = 38, OH_AI_DATATYPE_NUMBERTYPE_UINT32 = 39, OH_AI_DATATYPE_NUMBERTYPE_UINT64 = 40, OH_AI_DATATYPE_NUMBERTYPE_FLOAT16 = 42,<br>OH_AI_DATATYPE_NUMBERTYPE_FLOAT32 = 43, OH_AI_DATATYPE_NUMBERTYPE_FLOAT64 = 44, OH_AI_DATATYPE_NUMBERTYPE_END = 46, OH_AI_DataTypeInvalid = INT32_MAX<br>} | Declares data types supported by MSTensor.|
| [OH_AI_Format](#oh_ai_format) {<br>OH_AI_FORMAT_NCHW = 0, OH_AI_FORMAT_NHWC = 1, OH_AI_FORMAT_NHWC4 = 2, OH_AI_FORMAT_HWKC = 3,<br>OH_AI_FORMAT_HWCK = 4, OH_AI_FORMAT_KCHW = 5, OH_AI_FORMAT_CKHW = 6, OH_AI_FORMAT_KHWC = 7,<br>OH_AI_FORMAT_CHWK = 8, OH_AI_FORMAT_HW = 9, OH_AI_FORMAT_HW4 = 10, OH_AI_FORMAT_NC = 11,<br>OH_AI_FORMAT_NC4 = 12, OH_AI_FORMAT_NC4HW4 = 13, OH_AI_FORMAT_NCDHW = 15, OH_AI_FORMAT_NWC = 16,<br>OH_AI_FORMAT_NCW = 17<br>} | Declares data formats supported by MSTensor.|
| [OH_AI_CompCode](#oh_ai_compcode) { <br>OH_AI_COMPCODE_CORE = 0x00000000u, <br>OH_AI_COMPCODE_MD = 0x10000000u, <br>OH_AI_COMPCODE_ME = 0x20000000u, <br>OH_AI_COMPCODE_MC = 0x30000000u, <br>OH_AI_COMPCODE_LITE = 0xF0000000u<br> } | Defines MindSpore component codes. |
| [OH_AI_Status](#oh_ai_status) {<br>OH_AI_STATUS_SUCCESS = 0, OH_AI_STATUS_CORE_FAILED = OH_AI_COMPCODE_CORE \| 0x1, OH_AI_STATUS_LITE_ERROR = OH_AI_COMPCODE_LITE \| (0x0FFFFFFF &amp; -1), OH_AI_STATUS_LITE_NULLPTR = OH_AI_COMPCODE_LITE \| (0x0FFFFFFF &amp; -2),<br>OH_AI_STATUS_LITE_PARAM_INVALID = OH_AI_COMPCODE_LITE \| (0x0FFFFFFF &amp; -3), OH_AI_STATUS_LITE_NO_CHANGE = OH_AI_COMPCODE_LITE \| (0x0FFFFFFF &amp; -4), OH_AI_STATUS_LITE_SUCCESS_EXIT = OH_AI_COMPCODE_LITE \| (0x0FFFFFFF &amp; -5), OH_AI_STATUS_LITE_MEMORY_FAILED = OH_AI_COMPCODE_LITE \| (0x0FFFFFFF &amp; -6),<br>OH_AI_STATUS_LITE_NOT_SUPPORT = OH_AI_COMPCODE_LITE \| (0x0FFFFFFF &amp; -7), OH_AI_STATUS_LITE_THREADPOOL_ERROR = OH_AI_COMPCODE_LITE \| (0x0FFFFFFF &amp; -8), OH_AI_STATUS_LITE_UNINITIALIZED_OBJ = OH_AI_COMPCODE_LITE \| (0x0FFFFFFF &amp; -9), OH_AI_STATUS_LITE_OUT_OF_TENSOR_RANGE = OH_AI_COMPCODE_LITE \| (0x0FFFFFFF &amp; -100),<br>OH_AI_STATUS_LITE_INPUT_TENSOR_ERROR, OH_AI_STATUS_LITE_REENTRANT_ERROR = OH_AI_COMPCODE_LITE \| (0x0FFFFFFF &amp; -102), OH_AI_STATUS_LITE_GRAPH_FILE_ERROR = OH_AI_COMPCODE_LITE \| (0x0FFFFFFF &amp; -200), OH_AI_STATUS_LITE_NOT_FIND_OP = OH_AI_COMPCODE_LITE \| (0x0FFFFFFF &amp; -300),<br>OH_AI_STATUS_LITE_INVALID_OP_NAME = OH_AI_COMPCODE_LITE \| (0x0FFFFFFF &amp; -301), OH_AI_STATUS_LITE_INVALID_OP_ATTR = OH_AI_COMPCODE_LITE \| (0x0FFFFFFF &amp; -302), OH_AI_STATUS_LITE_OP_EXECUTE_FAILURE, OH_AI_STATUS_LITE_FORMAT_ERROR = OH_AI_COMPCODE_LITE \| (0x0FFFFFFF &amp; -400),<br>OH_AI_STATUS_LITE_INFER_ERROR = OH_AI_COMPCODE_LITE \| (0x0FFFFFFF &amp; -500), OH_AI_STATUS_LITE_INFER_INVALID, OH_AI_STATUS_LITE_INPUT_PARAM_INVALID<br>} | Defines MindSpore status codes.|
| [OH_AI_ModelType](#oh_ai_modeltype) { OH_AI_MODELTYPE_MINDIR = 0, OH_AI_MODELTYPE_INVALID = 0xFFFFFFFF } | Defines model file types.|
| [OH_AI_DeviceType](#oh_ai_devicetype) {<br>OH_AI_DEVICETYPE_CPU = 0, OH_AI_DEVICETYPE_GPU, OH_AI_DEVICETYPE_KIRIN_NPU, OH_AI_DEVICETYPE_NNRT = 60,<br>OH_AI_DEVICETYPE_INVALID = 100<br>} | Defines the supported device types.|
| [OH_AI_NNRTDeviceType](#oh_ai_nnrtdevicetype) { OH_AI_NNRTDEVICE_OTHERS = 0, OH_AI_NNRTDEVICE_CPU = 1, OH_AI_NNRTDEVICE_GPU = 2, OH_AI_NNRTDEVICE_ACCELERATOR = 3 } | Defines NNRt device types.|
| [OH_AI_PerformanceMode](#oh_ai_performancemode) {<br>OH_AI_PERFORMANCE_NONE = 0, OH_AI_PERFORMANCE_LOW = 1, OH_AI_PERFORMANCE_MEDIUM = 2, OH_AI_PERFORMANCE_HIGH = 3,<br>OH_AI_PERFORMANCE_EXTREME = 4<br>} | Defines performance modes of the NNRt device.|
| [OH_AI_Priority](#oh_ai_priority) { OH_AI_PRIORITY_NONE = 0, OH_AI_PRIORITY_LOW = 1, OH_AI_PRIORITY_MEDIUM = 2, OH_AI_PRIORITY_HIGH = 3 } | Defines NNRt inference task priorities.|
| [OH_AI_OptimizationLevel](#oh_ai_optimizationlevel) {<br>OH_AI_KO0 = 0, OH_AI_KO2 = 2, OH_AI_KO3 = 3, OH_AI_KAUTO = 4,<br>OH_AI_KOPTIMIZATIONTYPE = 0xFFFFFFFF<br>} | Defines training optimization levels.|
| [OH_AI_QuantizationType](#oh_ai_quantizationtype) { OH_AI_NO_QUANT = 0, OH_AI_WEIGHT_QUANT = 1, OH_AI_FULL_QUANT = 2, OH_AI_UNKNOWN_QUANT_TYPE = 0xFFFFFFFF } | Defines quantization types.|


### Functions

| Name| Description|
| -------- | -------- |
| [OH_AI_ContextCreate](#oh_ai_contextcreate) () | Creates a context object. This API must be used together with [OH_AI_ContextDestroy](#oh_ai_contextdestroy).|
| [OH_AI_ContextDestroy](#oh_ai_contextdestroy) ([OH_AI_ContextHandle](#oh_ai_contexthandle) \*context) | Destroys a context object.|
| [OH_AI_ContextSetThreadNum](#oh_ai_contextsetthreadnum) ([OH_AI_ContextHandle](#oh_ai_contexthandle) context, int32_t thread_num) | Sets the number of runtime threads.|
| [OH_AI_ContextGetThreadNum](#oh_ai_contextgetthreadnum) (const [OH_AI_ContextHandle](#oh_ai_contexthandle) context) | Obtains the number of threads.|
| [OH_AI_ContextSetThreadAffinityMode](#oh_ai_contextsetthreadaffinitymode) ([OH_AI_ContextHandle](#oh_ai_contexthandle) context, int mode) | Sets the affinity mode for binding runtime threads to CPU cores, which are classified into large, medium, and small cores by CPU frequency. Only large or medium cores can be bound, not small cores.|
| [OH_AI_ContextGetThreadAffinityMode](#oh_ai_contextgetthreadaffinitymode) (const [OH_AI_ContextHandle](#oh_ai_contexthandle) context) | Obtains the affinity mode for binding runtime threads to CPU cores.|
| [OH_AI_ContextSetThreadAffinityCoreList](#oh_ai_contextsetthreadaffinitycorelist) ([OH_AI_ContextHandle](#oh_ai_contexthandle) context, const int32_t \*core_list, size_t core_num) | Sets the list of CPU cores bound to a runtime thread.|
| [OH_AI_ContextGetThreadAffinityCoreList](#oh_ai_contextgetthreadaffinitycorelist) (const [OH_AI_ContextHandle](#oh_ai_contexthandle) context, size_t \*core_num) | Obtains the list of bound CPU cores.|
| [OH_AI_ContextSetEnableParallel](#oh_ai_contextsetenableparallel) ([OH_AI_ContextHandle](#oh_ai_contexthandle) context, bool is_parallel) | Sets whether to enable parallelism between operators. The setting is ineffective because the feature of this API is not yet available.|
| [OH_AI_ContextGetEnableParallel](#oh_ai_contextgetenableparallel) (const [OH_AI_ContextHandle](#oh_ai_contexthandle) context) | Checks whether parallelism between operators is supported.|
| [OH_AI_ContextAddDeviceInfo](#oh_ai_contextadddeviceinfo) ([OH_AI_ContextHandle](#oh_ai_contexthandle) context, [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) device_info) | Attaches the custom device information to the inference context.|
| [OH_AI_DeviceInfoCreate](#oh_ai_deviceinfocreate) ([OH_AI_DeviceType](#oh_ai_devicetype) device_type) | Creates a device information object.|
| [OH_AI_DeviceInfoDestroy](#oh_ai_deviceinfodestroy) ([OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) \*device_info) | Destroys a device information object. Note: After the device information instance is added to the context, the caller does not need to destroy it manually.|
| [OH_AI_DeviceInfoSetProvider](#oh_ai_deviceinfosetprovider) ([OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) device_info, const char \*provider) | Sets the name of the provider.|
| [OH_AI_DeviceInfoGetProvider](#oh_ai_deviceinfogetprovider) (const [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) device_info) | Obtains the provider name.|
| [OH_AI_DeviceInfoSetProviderDevice](#oh_ai_deviceinfosetproviderdevice) ([OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) device_info, const char \*device) | Sets the name of a provider device.|
| [OH_AI_DeviceInfoGetProviderDevice](#oh_ai_deviceinfogetproviderdevice) (const [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) device_info) | Obtains the name of a provider device.|
| [OH_AI_DeviceInfoGetDeviceType](#oh_ai_deviceinfogetdevicetype) (const [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) device_info) | Obtains the device type.|
| [OH_AI_DeviceInfoSetEnableFP16](#oh_ai_deviceinfosetenablefp16) ([OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) device_info, bool is_fp16) | Sets whether to enable float16 inference. This function is available only for CPU and GPU devices.|
| [OH_AI_DeviceInfoGetEnableFP16](#oh_ai_deviceinfogetenablefp16) (const [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) device_info) | Checks whether float16 inference is enabled. This function is available only for CPU and GPU devices.|
| [OH_AI_DeviceInfoSetFrequency](#oh_ai_deviceinfosetfrequency) ([OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) device_info, int frequency) | Sets the NPU frequency type. This function is available only for NPU devices.|
| [OH_AI_DeviceInfoGetFrequency](#oh_ai_deviceinfogetfrequency) (const [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) device_info) | Obtains the NPU frequency type. This function is available only for NPU devices.|
| [OH_AI_GetAllNNRTDeviceDescs](#oh_ai_getallnnrtdevicedescs) (size_t \*num) | Obtains the descriptions of all NNRt devices in the system.|
| [OH_AI_GetElementOfNNRTDeviceDescs](#oh_ai_getelementofnnrtdevicedescs) ([NNRTDeviceDesc](#nnrtdevicedesc) \*descs, size_t index) | Obtains the pointer to an element in the NNRt device description array.|
| [OH_AI_DestroyAllNNRTDeviceDescs](#oh_ai_destroyallnnrtdevicedescs) ([NNRTDeviceDesc](#nnrtdevicedesc) \*\*desc) | Destroys the NNRt device description array obtained by [OH_AI_GetAllNNRTDeviceDescs](#oh_ai_getallnnrtdevicedescs).|
| [OH_AI_GetDeviceIdFromNNRTDeviceDesc](#oh_ai_getdeviceidfromnnrtdevicedesc) (const [NNRTDeviceDesc](#nnrtdevicedesc) \*desc) | Obtains the NNRt device ID from the specified NNRt device description. Note that this ID is valid only for NNRt devices.|
| [OH_AI_GetNameFromNNRTDeviceDesc](#oh_ai_getnamefromnnrtdevicedesc) (const [NNRTDeviceDesc](#nnrtdevicedesc) \*desc) | Obtains the NNRt device name from the specified NNRt device description.|
| [OH_AI_GetTypeFromNNRTDeviceDesc](#oh_ai_gettypefromnnrtdevicedesc) (const [NNRTDeviceDesc](#nnrtdevicedesc) \*desc) | Obtains the NNRt device type from the specified NNRt device description.|
| [OH_AI_CreateNNRTDeviceInfoByName](#oh_ai_creatennrtdeviceinfobyname) (const char \*name) | Searches for the NNRt device with the specified name and creates the NNRt device information based on the information about the first found NNRt device.|
| [OH_AI_CreateNNRTDeviceInfoByType](#oh_ai_creatennrtdeviceinfobytype) ([OH_AI_NNRTDeviceType](#oh_ai_nnrtdevicetype) type) | Searches for the NNRt device with the specified type and creates the NNRt device information based on the information about the first found NNRt device.|
| [OH_AI_DeviceInfoSetDeviceId](#oh_ai_deviceinfosetdeviceid) ([OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) device_info, size_t device_id) | Sets the NNRt device ID. This function is available only for NNRt devices.|
| [OH_AI_DeviceInfoGetDeviceId](#oh_ai_deviceinfogetdeviceid) (const [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) device_info) | Obtains the NNRt device ID. This function is available only for NNRt devices.|
| [OH_AI_DeviceInfoSetPerformanceMode](#oh_ai_deviceinfosetperformancemode) ([OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) device_info, [OH_AI_PerformanceMode](#oh_ai_performancemode) mode) | Sets the NNRt performance mode. This function is available only for NNRt devices.|
| [OH_AI_DeviceInfoGetPerformanceMode](#oh_ai_deviceinfogetperformancemode) (const [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) device_info) | Obtains the NNRt performance mode. This function is available only for NNRt devices.|
| [OH_AI_DeviceInfoSetPriority](#oh_ai_deviceinfosetpriority) ([OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) device_info, [OH_AI_Priority](#oh_ai_priority) priority) | Sets the priority of an NNRt task. This function is available only for NNRt devices.|
| [OH_AI_DeviceInfoGetPriority](#oh_ai_deviceinfogetpriority) (const [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) device_info) | Obtains the priority of an NNRt task. This function is available only for NNRt devices.|
| [OH_AI_DeviceInfoAddExtension](#oh_ai_deviceinfoaddextension) ([OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) device_info, const char \*name, const char \*value, size_t value_size) | Adds extended configuration in the form of key/value pairs to the device information. This function is available only for NNRt devices.|
| [OH_AI_ModelCreate](#oh_ai_modelcreate) (void) | Creates a model object.|
| [OH_AI_ModelDestroy](#oh_ai_modeldestroy) ([OH_AI_ModelHandle](#oh_ai_modelhandle) \*model) | Destroys a model object.|
| [OH_AI_ModelBuild](#oh_ai_modelbuild) ([OH_AI_ModelHandle](#oh_ai_modelhandle) model, const void \*model_data, size_t data_size, [OH_AI_ModelType](#oh_ai_modeltype) model_type, const [OH_AI_ContextHandle](#oh_ai_contexthandle) model_context) | Loads and builds a MindSpore model from the memory buffer.|
| [OH_AI_ModelBuildFromFile](#oh_ai_modelbuildfromfile) ([OH_AI_ModelHandle](#oh_ai_modelhandle) model, const char \*model_path, [OH_AI_ModelType](#oh_ai_modeltype) model_type, const [OH_AI_ContextHandle](#oh_ai_contexthandle) model_context) | Loads and builds a MindSpore model from a model file.|
| [OH_AI_ModelResize](#oh_ai_modelresize) ([OH_AI_ModelHandle](#oh_ai_modelhandle) model, const [OH_AI_TensorHandleArray](_o_h___a_i___tensor_handle_array.md) inputs, [OH_AI_ShapeInfo](_o_h___a_i___shape_info.md) \*shape_infos, size_t shape_info_num) | Adjusts the input tensor shapes of a built model.|
| [OH_AI_ModelPredict](#oh_ai_modelpredict) ([OH_AI_ModelHandle](#oh_ai_modelhandle) model, const [OH_AI_TensorHandleArray](_o_h___a_i___tensor_handle_array.md) inputs, [OH_AI_TensorHandleArray](_o_h___a_i___tensor_handle_array.md) \*outputs, const [OH_AI_KernelCallBack](#oh_ai_kernelcallback) before, const [OH_AI_KernelCallBack](#oh_ai_kernelcallback) after) | Performs model inference.|
| [OH_AI_ModelGetInputs](#oh_ai_modelgetinputs) (const [OH_AI_ModelHandle](#oh_ai_modelhandle) model) | Obtains the input tensor array structure of a model.|
| [OH_AI_ModelGetOutputs](#oh_ai_modelgetoutputs) (const [OH_AI_ModelHandle](#oh_ai_modelhandle) model) | Obtains the output tensor array structure of a model.|
| [OH_AI_ModelGetInputByTensorName](#oh_ai_modelgetinputbytensorname) (const [OH_AI_ModelHandle](#oh_ai_modelhandle) model, const char \*tensor_name) | Obtains the input tensor of a model by tensor name.|
| [OH_AI_ModelGetOutputByTensorName](#oh_ai_modelgetoutputbytensorname) (const [OH_AI_ModelHandle](#oh_ai_modelhandle) model, const char \*tensor_name) | Obtains the output tensor of a model by tensor name.|
| [OH_AI_TrainCfgCreate](#oh_ai_traincfgcreate) () | Creates the pointer to the training configuration object. This API is used only for on-device training.|
| [OH_AI_TrainCfgDestroy](#oh_ai_traincfgdestroy) ([OH_AI_TrainCfgHandle](#oh_ai_traincfghandle) \*train_cfg) | Destroys the pointer to the training configuration object. This API is used only for on-device training.|
| [OH_AI_TrainCfgGetLossName](#oh_ai_traincfggetlossname) ([OH_AI_TrainCfgHandle](#oh_ai_traincfghandle) train_cfg, size_t \*num) | Obtains the list of loss functions, which are used only for on-device training.|
| [OH_AI_TrainCfgSetLossName](#oh_ai_traincfgsetlossname) ([OH_AI_TrainCfgHandle](#oh_ai_traincfghandle) train_cfg, const char \*\*loss_name, size_t num) | Sets the list of loss functions, which are used only for on-device training.|
| [OH_AI_TrainCfgGetOptimizationLevel](#oh_ai_traincfggetoptimizationlevel) ([OH_AI_TrainCfgHandle](#oh_ai_traincfghandle) train_cfg) | Obtains the optimization level of the training configuration object. This API is used only for on-device training.|
| [OH_AI_TrainCfgSetOptimizationLevel](#oh_ai_traincfgsetoptimizationlevel) ([OH_AI_TrainCfgHandle](#oh_ai_traincfghandle) train_cfg, [OH_AI_OptimizationLevel](#oh_ai_optimizationlevel) level) | Sets the optimization level of the training configuration object. This API is used only for on-device training.|
| [OH_AI_TrainModelBuild](#oh_ai_trainmodelbuild) ([OH_AI_ModelHandle](#oh_ai_modelhandle) model, const void \*model_data, size_t data_size, [OH_AI_ModelType](#oh_ai_modeltype) model_type, const [OH_AI_ContextHandle](#oh_ai_contexthandle) model_context, const [OH_AI_TrainCfgHandle](#oh_ai_traincfghandle) train_cfg) | Loads a training model from the memory buffer and compiles the model to a state ready for running on the device. This API is used only for on-device training.|
| [OH_AI_TrainModelBuildFromFile](#oh_ai_trainmodelbuildfromfile) ([OH_AI_ModelHandle](#oh_ai_modelhandle) model, const char \*model_path, [OH_AI_ModelType](#oh_ai_modeltype) model_type, const [OH_AI_ContextHandle](#oh_ai_contexthandle) model_context, const [OH_AI_TrainCfgHandle](#oh_ai_traincfghandle) train_cfg) | Loads the training model from the specified path and compiles the model to a state ready for running on the device. This API is used only for on-device training.|
| [OH_AI_RunStep](#oh_ai_runstep) ([OH_AI_ModelHandle](#oh_ai_modelhandle) model, const [OH_AI_KernelCallBack](#oh_ai_kernelcallback) before, const [OH_AI_KernelCallBack](#oh_ai_kernelcallback) after) | Performs a single training step on the model. This API is used only for on-device training.|
| [OH_AI_ModelSetLearningRate](#oh_ai_modelsetlearningrate) ([OH_AI_ModelHandle](#oh_ai_modelhandle) model, float learning_rate) | Sets the learning rate for model training. This API is used only for on-device training.|
| [OH_AI_ModelGetLearningRate](#oh_ai_modelgetlearningrate) ([OH_AI_ModelHandle](#oh_ai_modelhandle) model) | Obtains the learning rate for model training. This API is used only for on-device training.|
| [OH_AI_ModelGetWeights](#oh_ai_modelgetweights) ([OH_AI_ModelHandle](#oh_ai_modelhandle) model) | Obtains all weight tensors of a model. This API is used only for on-device training.|
| [OH_AI_ModelUpdateWeights](#oh_ai_modelupdateweights) ([OH_AI_ModelHandle](#oh_ai_modelhandle) model, const [OH_AI_TensorHandleArray](_o_h___a_i___tensor_handle_array.md) new_weights) | Updates the weight tensors of a model. This API is used only for on-device training.|
| [OH_AI_ModelGetTrainMode](#oh_ai_modelgettrainmode) ([OH_AI_ModelHandle](#oh_ai_modelhandle) model) | Obtains the training mode.|
| [OH_AI_ModelSetTrainMode](#oh_ai_modelsettrainmode) ([OH_AI_ModelHandle](#oh_ai_modelhandle) model, bool train) | Sets the training mode. This API is used only for on-device training.|
| [OH_AI_ModelSetupVirtualBatch](#oh_ai_modelsetupvirtualbatch) ([OH_AI_ModelHandle](#oh_ai_modelhandle) model, int virtual_batch_multiplier, float lr, float momentum) | Sets the virtual batch for training. This API is used only for on-device training.|
| [OH_AI_ExportModel](#oh_ai_exportmodel) ([OH_AI_ModelHandle](#oh_ai_modelhandle) model, [OH_AI_ModelType](#oh_ai_modeltype) model_type, const char \*model_file, [OH_AI_QuantizationType](#oh_ai_quantizationtype) quantization_type, bool export_inference_only, char \*\*output_tensor_name, size_t num) | Exports a training model. This API is used only for on-device training.|
| [OH_AI_ExportModelBuffer](#oh_ai_exportmodelbuffer) ([OH_AI_ModelHandle](#oh_ai_modelhandle) model, [OH_AI_ModelType](#oh_ai_modeltype) model_type, void \*model_data, size_t \*data_size, [OH_AI_QuantizationType](#oh_ai_quantizationtype) quantization_type, bool export_inference_only, char \*\*output_tensor_name, size_t num) | Exports the memory cache of the training model. This API is used only for on-device training.|
| [OH_AI_ExportWeightsCollaborateWithMicro](#oh_ai_exportweightscollaboratewithmicro) ([OH_AI_ModelHandle](#oh_ai_modelhandle) model, [OH_AI_ModelType](#oh_ai_modeltype) model_type, const char \*weight_file, bool is_inference, bool enable_fp16, char \*\*changeable_weights_name, size_t num) | Exports the weight file of the training model for micro inference. This API is used only for on-device training.|
| [OH_AI_TensorCreate](#oh_ai_tensorcreate) (const char \*name, [OH_AI_DataType](#oh_ai_datatype) type, const int64_t \*shape, size_t shape_num, const void \*data, size_t data_len) | Creates a tensor object.|
| [OH_AI_TensorDestroy](#oh_ai_tensordestroy) ([OH_AI_TensorHandle](#oh_ai_tensorhandle) \*tensor) | Destroys a tensor object.|
| [OH_AI_TensorClone](#oh_ai_tensorclone) ([OH_AI_TensorHandle](#oh_ai_tensorhandle) tensor) | Clones a tensor.|
| [OH_AI_TensorSetName](#oh_ai_tensorsetname) ([OH_AI_TensorHandle](#oh_ai_tensorhandle) tensor, const char \*name) | Sets the tensor name.|
| [OH_AI_TensorGetName](#oh_ai_tensorgetname) (const [OH_AI_TensorHandle](#oh_ai_tensorhandle) tensor) | Obtains the tensor name.|
| [OH_AI_TensorSetDataType](#oh_ai_tensorsetdatatype) ([OH_AI_TensorHandle](#oh_ai_tensorhandle) tensor, [OH_AI_DataType](#oh_ai_datatype) type) | Sets the data type of a tensor.|
| [OH_AI_TensorGetDataType](#oh_ai_tensorgetdatatype) (const [OH_AI_TensorHandle](#oh_ai_tensorhandle) tensor) | Obtains the tensor type.|
| [OH_AI_TensorSetShape](#oh_ai_tensorsetshape) ([OH_AI_TensorHandle](#oh_ai_tensorhandle) tensor, const int64_t \*shape, size_t shape_num) | Sets the tensor shape.|
| [OH_AI_TensorGetShape](#oh_ai_tensorgetshape) (const [OH_AI_TensorHandle](#oh_ai_tensorhandle) tensor, size_t \*shape_num) | Obtains the tensor shape.|
| [OH_AI_TensorSetFormat](#oh_ai_tensorsetformat) ([OH_AI_TensorHandle](#oh_ai_tensorhandle) tensor, [OH_AI_Format](#oh_ai_format) format) | Sets the tensor data format.|
| [OH_AI_TensorGetFormat](#oh_ai_tensorgetformat) (const [OH_AI_TensorHandle](#oh_ai_tensorhandle) tensor) | Obtains the tensor data format.|
| [OH_AI_TensorSetData](#oh_ai_tensorsetdata) ([OH_AI_TensorHandle](#oh_ai_tensorhandle) tensor, void \*data) | Sets the tensor data.|
| [OH_AI_TensorGetData](#oh_ai_tensorgetdata) (const [OH_AI_TensorHandle](#oh_ai_tensorhandle) tensor) | Obtains the pointer to tensor data.|
| [OH_AI_TensorGetMutableData](#oh_ai_tensorgetmutabledata) (const [OH_AI_TensorHandle](#oh_ai_tensorhandle) tensor) | Obtains the pointer to variable tensor data. If the data is empty, memory will be allocated.|
| [OH_AI_TensorGetElementNum](#oh_ai_tensorgetelementnum) (const [OH_AI_TensorHandle](#oh_ai_tensorhandle) tensor) | Obtains the number of tensor elements.|
| [OH_AI_TensorGetDataSize](#oh_ai_tensorgetdatasize) (const [OH_AI_TensorHandle](#oh_ai_tensorhandle) tensor) | Obtains the number of bytes of the tensor data.|
| [OH_AI_TensorSetUserData](#oh_ai_tensorsetuserdata) ([OH_AI_TensorHandle](#oh_ai_tensorhandle) tensor, void \*data, size_t data_size) | Sets user data as the tensor data. This allows user data to be reused as the model input, which saves one data copy.<br>> **NOTE**<br>The user data is external to the tensor and is not automatically released when the tensor is destroyed. The caller must release it separately and ensure that it remains valid for as long as the tensor is in use.|
| [OH_AI_TensorGetAllocator](#oh_ai_tensorgetallocator) ([OH_AI_TensorHandle](#oh_ai_tensorhandle) tensor) | Obtains a memory allocator. The allocator is responsible for allocating memory for tensors.|
| [OH_AI_TensorSetAllocator](#oh_ai_tensorsetallocator) ([OH_AI_TensorHandle](#oh_ai_tensorhandle) tensor, [OH_AI_AllocatorHandle](#oh_ai_allocatorhandle) allocator) | Sets the memory allocator. The allocator is responsible for allocating memory for tensors.|


## Macro Description


### OH_AI_MAX_SHAPE_NUM

```
#define OH_AI_MAX_SHAPE_NUM   32
```

**Description**

Maximum number of tensor dimensions. The reserved maximum is **32**; at most **8** dimensions are currently supported.

**Since**: 9


## Type Description


### NNRTDeviceDesc

```
typedef struct NNRTDeviceDesc NNRTDeviceDesc
```

**Description**

Defines NNRt device information, including the device ID and device name.

**Since**: 10

### OH_AI_AllocatorHandle

```
typedef void *OH_AI_AllocatorHandle
```

**Description**

Handle of the memory allocator.

**Since**: 12

### OH_AI_CallBackParam

```
typedef struct OH_AI_CallBackParam OH_AI_CallBackParam
```

**Description**

Defines the operator information passed in a callback.

**Since**: 9


### OH_AI_ContextHandle

```
typedef void* OH_AI_ContextHandle
```

**Description**

Defines the pointer to the MindSpore context.

**Since**: 9


### OH_AI_DataType

```
typedef enum OH_AI_DataType OH_AI_DataType
```

**Description**

Declares data types supported by MSTensor.

**Since**: 9


### OH_AI_DeviceInfoHandle

```
typedef void* OH_AI_DeviceInfoHandle
```

**Description**

Defines the pointer to the MindSpore device information.

**Since**: 9


### OH_AI_DeviceType

```
typedef enum OH_AI_DeviceType OH_AI_DeviceType
```

**Description**

Defines the supported device types.

**Since**: 9


### OH_AI_Format

```
typedef enum OH_AI_Format OH_AI_Format
```

**Description**

Declares data formats supported by MSTensor.

**Since**: 9

### OH_AI_KernelCallBack

```
typedef bool(* OH_AI_KernelCallBack) (const OH_AI_TensorHandleArray inputs, const OH_AI_TensorHandleArray outputs, const OH_AI_CallBackParam kernel_Info)
```

**Description**

Defines the pointer to a callback.

This pointer is used to set the two callback functions in [OH_AI_ModelPredict](#oh_ai_modelpredict). Each callback function takes three parameters: **inputs** and **outputs** are the input and output tensors of the operator, and **kernel_Info** describes the current operator. The callbacks can be used to monitor operator execution, for example to measure execution time or verify correctness.

**Since**: 9


### OH_AI_ModelHandle

```
typedef void* OH_AI_ModelHandle
```

**Description**

Defines the pointer to a model object.

**Since**: 9


### OH_AI_ModelType

```
typedef enum OH_AI_ModelType OH_AI_ModelType
```

**Description**

Defines model file types.

**Since**: 9


### OH_AI_NNRTDeviceType

```
typedef enum OH_AI_NNRTDeviceType OH_AI_NNRTDeviceType
```

**Description**

Defines NNRt device types.

**Since**: 10


### OH_AI_PerformanceMode

```
typedef enum OH_AI_PerformanceMode OH_AI_PerformanceMode
```

**Description**

Defines performance modes of the NNRt device.

**Since**: 10


### OH_AI_Priority

```
typedef enum OH_AI_Priority OH_AI_Priority
```

**Description**

Defines NNRt inference task priorities.

**Since**: 10


### OH_AI_Status

```
typedef enum OH_AI_Status OH_AI_Status
```

**Description**

Defines MindSpore status codes.

**Since**: 9


### OH_AI_TensorHandle

```
typedef void* OH_AI_TensorHandle
```

**Description**

Handle of a tensor object.

**Since**: 9


### OH_AI_TensorHandleArray

```
typedef struct OH_AI_TensorHandleArray OH_AI_TensorHandleArray
```

**Description**

Defines the tensor array structure, which is used to store the tensor array pointer and tensor array length.

**Since**: 9
417
418
419### OH_AI_TrainCfgHandle
420
421```
422typedef void* OH_AI_TrainCfgHandle
423```
424
425**Description**
426
427Defines the pointer to a training configuration object.
428
429**Since**: 11
430
431
432## Enum Description
433
434
435### OH_AI_CompCode
436
437```
438enum OH_AI_CompCode
439```
440
441**Description**
442
443Defines MindSpore component codes.
444
445**Since**: 9
446
447| Value| Description|
448| -------- | -------- |
449| OH_AI_COMPCODE_CORE | MindSpore Core code.|
450| OH_AI_COMPCODE_MD   | MindSpore MindData code.|
451| OH_AI_COMPCODE_ME   | MindSpore MindExpression code.|
452| OH_AI_COMPCODE_MC   | MindSpore code.|
453| OH_AI_COMPCODE_LITE | MindSpore Lite code.|
454
455
456### OH_AI_DataType
457
458```
459enum OH_AI_DataType
460```
461
462**Description**
463
464Declares data types supported by MSTensor.
465
466**Since**: 9
467
468| Value| Description|
469| -------- | -------- |
470| OH_AI_DATATYPE_UNKNOWN | Unknown data type.|
471| OH_AI_DATATYPE_OBJECTTYPE_STRING | String data.|
472| OH_AI_DATATYPE_OBJECTTYPE_LIST | List data.|
473| OH_AI_DATATYPE_OBJECTTYPE_TUPLE | Tuple data.|
474| OH_AI_DATATYPE_OBJECTTYPE_TENSOR | TensorList data.|
475| OH_AI_DATATYPE_NUMBERTYPE_BEGIN | Beginning of the number type.|
476| OH_AI_DATATYPE_NUMBERTYPE_BOOL | Bool data.|
477| OH_AI_DATATYPE_NUMBERTYPE_INT8 | Int8 data.|
478| OH_AI_DATATYPE_NUMBERTYPE_INT16 | Int16 data.|
479| OH_AI_DATATYPE_NUMBERTYPE_INT32 | Int32 data.|
480| OH_AI_DATATYPE_NUMBERTYPE_INT64 | Int64 data.|
481| OH_AI_DATATYPE_NUMBERTYPE_UINT8 | UInt8 data.|
482| OH_AI_DATATYPE_NUMBERTYPE_UINT16 | UInt16 data.|
483| OH_AI_DATATYPE_NUMBERTYPE_UINT32 | UInt32 data.|
484| OH_AI_DATATYPE_NUMBERTYPE_UINT64 | UInt64 data.|
485| OH_AI_DATATYPE_NUMBERTYPE_FLOAT16 | Float16 data.|
486| OH_AI_DATATYPE_NUMBERTYPE_FLOAT32 | Float32 data.|
487| OH_AI_DATATYPE_NUMBERTYPE_FLOAT64 | Float64 data.|
488| OH_AI_DATATYPE_NUMBERTYPE_END | End of the number type.|
489| OH_AI_DataTypeInvalid | Invalid data type.|
490
491
492### OH_AI_DeviceType
493
494```
495enum OH_AI_DeviceType
496```
497
498**Description**
499
500Defines the supported device types.
501
502**Since**: 9
503
504| Value| Description|
505| -------- | -------- |
506| OH_AI_DEVICETYPE_CPU | Device type: CPU|
507| OH_AI_DEVICETYPE_GPU | Device type: GPU<br>This configuration is open for upstream open source projects and is not supported by OpenHarmony.|
508| OH_AI_DEVICETYPE_KIRIN_NPU | Device type: Kirin NPU<br>This configuration is open for upstream open source projects and is not supported by OpenHarmony.<br>To use KIRIN_NPU, set **OH_AI_DEVICETYPE_NNRT**.|
509| OH_AI_DEVICETYPE_NNRT | NNRt, a cross-chip inference and computing runtime oriented to the AI field.<br>OHOS device range: [60, 80)|
510| OH_AI_DEVICETYPE_INVALID | Invalid device type.|
511
512
513### OH_AI_Format
514
515```
516enum OH_AI_Format
517```
518
519**Description**
520
521Declares data formats supported by MSTensor.
522
523**Since**: 9
524
525| Value| Description|
526| -------- | -------- |
527| OH_AI_FORMAT_NCHW | Tensor data is stored in the sequence of batch number N, channel C, height H, and width W.|
528| OH_AI_FORMAT_NHWC | Tensor data is stored in the sequence of batch number N, height H, width W, and channel C.|
529| OH_AI_FORMAT_NHWC4 | Tensor data is stored in the sequence of batch number N, height H, width W, and channel C. The C axis is 4-byte aligned.|
530| OH_AI_FORMAT_HWKC | Tensor data is stored in the sequence of height H, width W, core count K, and channel C.|
531| OH_AI_FORMAT_HWCK | Tensor data is stored in the sequence of height H, width W, channel C, and core count K.|
532| OH_AI_FORMAT_KCHW | Tensor data is stored in the sequence of core count K, channel C, height H, and width W.|
533| OH_AI_FORMAT_CKHW | Tensor data is stored in the sequence of channel C, core count K, height H, and width W.|
534| OH_AI_FORMAT_KHWC | Tensor data is stored in the sequence of core count K, height H, width W, and channel C.|
535| OH_AI_FORMAT_CHWK | Tensor data is stored in the sequence of channel C, height H, width W, and core count K.|
536| OH_AI_FORMAT_HW | Tensor data is stored in the sequence of height H and width W.|
537| OH_AI_FORMAT_HW4 | Tensor data is stored in the sequence of height H and width W. The W axis is 4-byte aligned.|
538| OH_AI_FORMAT_NC | Tensor data is stored in the sequence of batch number N and channel C.|
539| OH_AI_FORMAT_NC4 | Tensor data is stored in the sequence of batch number N and channel C. The C axis is 4-byte aligned.|
540| OH_AI_FORMAT_NC4HW4 | Tensor data is stored in the sequence of batch number N, channel C, height H, and width W. The C axis and W axis are 4-byte aligned.|
541| OH_AI_FORMAT_NCDHW | Tensor data is stored in the sequence of batch number N, channel C, depth D, height H, and width W.|
542| OH_AI_FORMAT_NWC | Tensor data is stored in the sequence of batch number N, width W, and channel C.|
543| OH_AI_FORMAT_NCW | Tensor data is stored in the sequence of batch number N, channel C, and width W.|
544
545
546### OH_AI_ModelType
547
548```
549enum OH_AI_ModelType
550```
551
552**Description**
553
554Defines model file types.
555
556**Since**: 9
557
558| Value| Description|
559| -------- | -------- |
560| OH_AI_MODELTYPE_MINDIR | Model type of MindIR. The extension of the model file name is **.ms**.|
561| OH_AI_MODELTYPE_INVALID | Invalid model type.|
562
563
564### OH_AI_NNRTDeviceType
565
566```
567enum OH_AI_NNRTDeviceType
568```
569
570**Description**
571
572Defines NNRt device types.
573
574**Since**: 10
575
576| Value| Description|
577| -------- | -------- |
578| OH_AI_NNRTDEVICE_OTHERS | Others (any device type except the following three types).|
579| OH_AI_NNRTDEVICE_CPU | CPU.|
580| OH_AI_NNRTDEVICE_GPU | GPU.|
581| OH_AI_NNRTDEVICE_ACCELERATOR | Specific acceleration device.|
582
583
584### OH_AI_OptimizationLevel
585
586```
587enum OH_AI_OptimizationLevel
588```
589
590**Description**
591
592Defines training optimization levels.
593
**Since**: 11

| Value| Description|
| -------- | -------- |
| OH_AI_KO0 | No optimization level.|
| OH_AI_KO2 | Converts the precision type of the network to float16 and keeps the precision type of the batch normalization layer and loss function as float32.|
| OH_AI_KO3 | Converts the precision type of the network (including the batch normalization layer) to float16.|
| OH_AI_KAUTO | Selects an optimization level based on the device.|
| OH_AI_KOPTIMIZATIONTYPE | Invalid optimization level.|


### OH_AI_PerformanceMode

```
enum OH_AI_PerformanceMode
```

**Description**

Defines performance modes of the NNRt device.

**Since**: 10

| Value| Description|
| -------- | -------- |
| OH_AI_PERFORMANCE_NONE | No special settings.|
| OH_AI_PERFORMANCE_LOW | Low power consumption.|
| OH_AI_PERFORMANCE_MEDIUM | Power consumption and performance balancing.|
| OH_AI_PERFORMANCE_HIGH | High performance.|
| OH_AI_PERFORMANCE_EXTREME | Ultimate performance.|


### OH_AI_Priority

```
enum OH_AI_Priority
```

**Description**

Defines NNRt inference task priorities.

**Since**: 10

| Value| Description|
| -------- | -------- |
| OH_AI_PRIORITY_NONE | No priority preference.|
| OH_AI_PRIORITY_LOW | Low priority.|
| OH_AI_PRIORITY_MEDIUM | Medium priority.|
| OH_AI_PRIORITY_HIGH | High priority.|


### OH_AI_QuantizationType

```
enum OH_AI_QuantizationType
```

**Description**

Defines quantization types.

**Since**: 11

| Value| Description|
| -------- | -------- |
| OH_AI_NO_QUANT | No quantization.|
| OH_AI_WEIGHT_QUANT | Weight quantization.|
| OH_AI_FULL_QUANT | Full quantization.|
| OH_AI_UNKNOWN_QUANT_TYPE | Invalid quantization type.|


### OH_AI_Status

```
enum OH_AI_Status
```

**Description**

Defines MindSpore status codes.

**Since**: 9

| Value| Description|
| -------- | -------- |
| OH_AI_STATUS_SUCCESS | Success.|
| OH_AI_STATUS_CORE_FAILED | MindSpore Core failure.|
| OH_AI_STATUS_LITE_ERROR | MindSpore Lite error.|
| OH_AI_STATUS_LITE_NULLPTR | MindSpore Lite null pointer.|
| OH_AI_STATUS_LITE_PARAM_INVALID | MindSpore Lite invalid parameters.|
| OH_AI_STATUS_LITE_NO_CHANGE | MindSpore Lite no change.|
| OH_AI_STATUS_LITE_SUCCESS_EXIT | MindSpore Lite exit without errors.|
| OH_AI_STATUS_LITE_MEMORY_FAILED | MindSpore Lite memory allocation failure.|
| OH_AI_STATUS_LITE_NOT_SUPPORT | MindSpore Lite function not supported.|
| OH_AI_STATUS_LITE_THREADPOOL_ERROR | MindSpore Lite thread pool error.|
| OH_AI_STATUS_LITE_UNINITIALIZED_OBJ | MindSpore Lite uninitialized.|
| OH_AI_STATUS_LITE_OUT_OF_TENSOR_RANGE | MindSpore Lite tensor overflow.|
| OH_AI_STATUS_LITE_INPUT_TENSOR_ERROR | MindSpore Lite input tensor error.|
| OH_AI_STATUS_LITE_REENTRANT_ERROR | MindSpore Lite reentry error.|
| OH_AI_STATUS_LITE_GRAPH_FILE_ERROR | MindSpore Lite file error.|
| OH_AI_STATUS_LITE_NOT_FIND_OP | MindSpore Lite operator not found.|
| OH_AI_STATUS_LITE_INVALID_OP_NAME | MindSpore Lite invalid operators.|
| OH_AI_STATUS_LITE_INVALID_OP_ATTR | MindSpore Lite invalid operator hyperparameters.|
| OH_AI_STATUS_LITE_OP_EXECUTE_FAILURE | MindSpore Lite operator execution failure.|
| OH_AI_STATUS_LITE_FORMAT_ERROR | MindSpore Lite tensor format error.|
| OH_AI_STATUS_LITE_INFER_ERROR | MindSpore Lite shape inference error.|
| OH_AI_STATUS_LITE_INFER_INVALID | MindSpore Lite invalid shape inference.|
| OH_AI_STATUS_LITE_INPUT_PARAM_INVALID | MindSpore Lite invalid input parameters.|


## Function Description


### OH_AI_ContextAddDeviceInfo()

```
OH_AI_API void OH_AI_ContextAddDeviceInfo (OH_AI_ContextHandle context, OH_AI_DeviceInfoHandle device_info )
```

**Description**

Attaches the custom device information to the inference context.

**Since**: 9

**Parameters**

| Name| Description|
| -------- | -------- |
| context | [OH_AI_ContextHandle](#oh_ai_contexthandle) that points to the context instance.|
| device_info | [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) that points to a device information instance.|


### OH_AI_ContextCreate()

```
OH_AI_API OH_AI_ContextHandle OH_AI_ContextCreate ()
```

**Description**

Creates a context object. This API must be used together with [OH_AI_ContextDestroy](#oh_ai_contextdestroy).

**Since**: 9

**Returns**

[OH_AI_ContextHandle](#oh_ai_contexthandle) that points to the context instance.


### OH_AI_ContextDestroy()

```
OH_AI_API void OH_AI_ContextDestroy (OH_AI_ContextHandle * context)
```

**Description**

Destroys a context object.

**Since**: 9

**Parameters**

| Name| Description|
| -------- | -------- |
| context | Level-2 pointer to [OH_AI_ContextHandle](#oh_ai_contexthandle). After the context is destroyed, the pointer is set to null.|

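The context lifecycle described above (create, configure, attach device information, destroy) can be sketched as follows. This is a hedged illustration, not verbatim product code; it assumes the NDK headers `<mindspore/context.h>` and `<mindspore/types.h>` are available:

```c
#include <stddef.h>
#include <mindspore/context.h>
#include <mindspore/types.h>

int main(void) {
  // Create a context and configure its thread count.
  OH_AI_ContextHandle context = OH_AI_ContextCreate();
  if (context == NULL) {
    return -1;
  }
  OH_AI_ContextSetThreadNum(context, 2);

  // Create CPU device information and attach it to the context.
  // Once attached, the device info is managed by the context and does
  // not need to be destroyed separately.
  OH_AI_DeviceInfoHandle cpu_info = OH_AI_DeviceInfoCreate(OH_AI_DEVICETYPE_CPU);
  OH_AI_ContextAddDeviceInfo(context, cpu_info);

  // ... build a model with this context and run inference ...

  // Destroy the context; the handle is set to null afterwards.
  OH_AI_ContextDestroy(&context);
  return 0;
}
```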

### OH_AI_ContextGetEnableParallel()

```
OH_AI_API bool OH_AI_ContextGetEnableParallel (const OH_AI_ContextHandle context)
```

**Description**

Checks whether parallelism between operators is supported.

**Since**: 9

**Parameters**

| Name| Description|
| -------- | -------- |
| context | [OH_AI_ContextHandle](#oh_ai_contexthandle) that points to the context instance.|

**Returns**

Whether parallelism between operators is supported. The value **true** means that parallelism between operators is supported, and the value **false** means the opposite.


### OH_AI_ContextGetThreadAffinityCoreList()

```
OH_AI_API const int32_t* OH_AI_ContextGetThreadAffinityCoreList (const OH_AI_ContextHandle context, size_t * core_num )
```

**Description**

Obtains the list of bound CPU cores.

**Since**: 9

**Parameters**

| Name| Description|
| -------- | -------- |
| context | [OH_AI_ContextHandle](#oh_ai_contexthandle) that points to the context instance.|
| core_num | Number of CPU cores.|

**Returns**

CPU core binding list. This list is managed by [OH_AI_ContextHandle](#oh_ai_contexthandle). The caller does not need to destroy it manually.


### OH_AI_ContextGetThreadAffinityMode()

```
OH_AI_API int OH_AI_ContextGetThreadAffinityMode (const OH_AI_ContextHandle context)
```

**Description**

Obtains the affinity mode for binding runtime threads to CPU cores.

**Since**: 9

**Parameters**

| Name| Description|
| -------- | -------- |
| context | [OH_AI_ContextHandle](#oh_ai_contexthandle) that points to the context instance.|

**Returns**

Affinity mode. **0**: no affinities; **1**: big cores first; **2**: medium cores first


### OH_AI_ContextGetThreadNum()

```
OH_AI_API int32_t OH_AI_ContextGetThreadNum (const OH_AI_ContextHandle context)
```

**Description**

Obtains the number of threads.

**Since**: 9

**Parameters**

| Name| Description|
| -------- | -------- |
| context | [OH_AI_ContextHandle](#oh_ai_contexthandle) that points to the context instance.|

**Returns**

Number of threads.


### OH_AI_ContextSetEnableParallel()

```
OH_AI_API void OH_AI_ContextSetEnableParallel (OH_AI_ContextHandle context, bool is_parallel )
```

**Description**

Sets whether to enable parallelism between operators. The setting is ineffective because the feature of this API is not yet available.

**Since**: 9

**Parameters**

| Name| Description|
| -------- | -------- |
| context | [OH_AI_ContextHandle](#oh_ai_contexthandle) that points to the context instance.|
| is_parallel | Whether parallelism between operators is supported. The value **true** means that parallelism between operators is supported, and the value **false** means the opposite.|


### OH_AI_ContextSetThreadAffinityCoreList()

```
OH_AI_API void OH_AI_ContextSetThreadAffinityCoreList (OH_AI_ContextHandle context, const int32_t * core_list, size_t core_num )
```

**Description**

Sets the list of CPU cores bound to a runtime thread.

For example, if **core_list** is set to **[2,6,8]**, threads run on the 2nd, 6th, and 8th CPU cores. If [OH_AI_ContextSetThreadAffinityMode](#oh_ai_contextsetthreadaffinitymode) and [OH_AI_ContextSetThreadAffinityCoreList](#oh_ai_contextsetthreadaffinitycorelist) are called for the same context object, the **core_list** parameter of [OH_AI_ContextSetThreadAffinityCoreList](#oh_ai_contextsetthreadaffinitycorelist) takes effect, but the **mode** parameter of [OH_AI_ContextSetThreadAffinityMode](#oh_ai_contextsetthreadaffinitymode) does not.

**Since**: 9

**Parameters**

| Name| Description|
| -------- | -------- |
| context | [OH_AI_ContextHandle](#oh_ai_contexthandle) that points to the context instance.|
| core_list | List of bound CPU cores.|
| core_num | Number of cores, which indicates the length of **core_list**.|

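The precedence rule above (an explicit core list overrides the affinity mode) can be sketched as follows; this is a hedged example assuming the NDK header `<mindspore/context.h>`:

```c
#include <stdint.h>
#include <stddef.h>
#include <mindspore/context.h>

void ConfigureAffinity(OH_AI_ContextHandle context) {
  // First request "big cores first" (mode 1)...
  OH_AI_ContextSetThreadAffinityMode(context, 1);

  // ...then bind an explicit core list. Because both are set on the same
  // context object, the core list takes effect and the mode is ignored.
  const int32_t core_list[] = {2, 6, 8};
  OH_AI_ContextSetThreadAffinityCoreList(context, core_list,
                                         sizeof(core_list) / sizeof(core_list[0]));
}
```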

### OH_AI_ContextSetThreadAffinityMode()

```
OH_AI_API void OH_AI_ContextSetThreadAffinityMode (OH_AI_ContextHandle context, int mode )
```

**Description**

Sets the affinity mode for binding runtime threads to CPU cores. Based on CPU frequency, cores are classified into big, medium, and small cores; threads can be bound only to the big or medium cores, not to the small cores.

**Since**: 9

**Parameters**

| Name| Description|
| -------- | -------- |
| context | [OH_AI_ContextHandle](#oh_ai_contexthandle) that points to the context instance.|
| mode | Affinity mode. **0**: no affinities; **1**: big cores first; **2**: medium cores first|


### OH_AI_ContextSetThreadNum()

```
OH_AI_API void OH_AI_ContextSetThreadNum (OH_AI_ContextHandle context, int32_t thread_num )
```

**Description**

Sets the number of runtime threads.

**Since**: 9

**Parameters**

| Name| Description|
| -------- | -------- |
| context | [OH_AI_ContextHandle](#oh_ai_contexthandle) that points to the context instance.|
| thread_num | Number of runtime threads.|


### OH_AI_CreateNNRTDeviceInfoByName()

```
OH_AI_API OH_AI_DeviceInfoHandle OH_AI_CreateNNRTDeviceInfoByName (const char * name)
```

**Description**

Searches for the NNRt device with the specified name and creates the NNRt device information based on the information about the first found NNRt device.

**Since**: 10

**Parameters**

| Name| Description|
| -------- | -------- |
| name | Name of the target NNRt device.|

**Returns**

[OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) that points to a device information instance.

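A hedged sketch of looking up an NNRt device by name and attaching it to a context. The device name **"example_npu"** is a placeholder; actual names depend on the NNRt backends installed on the device:

```c
#include <stddef.h>
#include <mindspore/context.h>

OH_AI_ContextHandle CreateNNRTContextByName(void) {
  // "example_npu" is a hypothetical device name for illustration only.
  OH_AI_DeviceInfoHandle nnrt_info = OH_AI_CreateNNRTDeviceInfoByName("example_npu");
  if (nnrt_info == NULL) {
    return NULL;  // No NNRt device with that name was found.
  }
  OH_AI_ContextHandle context = OH_AI_ContextCreate();
  OH_AI_ContextAddDeviceInfo(context, nnrt_info);
  return context;
}
```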

### OH_AI_CreateNNRTDeviceInfoByType()

```
OH_AI_API OH_AI_DeviceInfoHandle OH_AI_CreateNNRTDeviceInfoByType (OH_AI_NNRTDeviceType type)
```

**Description**

Searches for the NNRt device with the specified type and creates the NNRt device information based on the information about the first found NNRt device.

**Since**: 10

**Parameters**

| Name| Description|
| -------- | -------- |
| type | NNRt device type, which is specified by [OH_AI_NNRTDeviceType](#oh_ai_nnrtdevicetype).|

**Returns**

[OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) that points to a device information instance.


### OH_AI_DestroyAllNNRTDeviceDescs()

```
OH_AI_API void OH_AI_DestroyAllNNRTDeviceDescs (NNRTDeviceDesc ** desc)
```

**Description**

Destroys the NNRt device description array obtained by [OH_AI_GetAllNNRTDeviceDescs](#oh_ai_getallnnrtdevicedescs).

**Since**: 10

**Parameters**

| Name| Description|
| -------- | -------- |
| desc | Double pointer to the array of the NNRt device descriptions. After the operation is complete, the content pointed to by **desc** is set to **NULL**.|


### OH_AI_DeviceInfoAddExtension()

```
OH_AI_API OH_AI_Status OH_AI_DeviceInfoAddExtension (OH_AI_DeviceInfoHandle device_info, const char * name, const char * value, size_t value_size )
```

**Description**

Adds extended configuration in the form of key/value pairs to the device information. This function is available only for NNRt devices.

Note: Currently, only 11 key/value pairs are supported: {"CachePath": "YourCachePath"}, {"CacheVersion": "YourCacheVersion"}, {"QuantBuffer": "YourQuantBuffer"}, {"ModelName": "YourModelName"}, {"isProfiling": "YourisProfiling"}, {"opLayout": "YouropLayout"}, {"InputDims": "YourInputDims"}, {"DynamicDims": "YourDynamicDims"}, {"QuantConfigData": "YourQuantConfigData"}, {"BandMode": "YourBandMode"}, {"NPU_FM_SHARED": "YourNPU_FM_SHARED"}. Replace the values as required.

**Since**: 10

**Parameters**

| Name| Description|
| -------- | -------- |
| device_info | [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) that points to a device information instance.|
| name | Key in an extended key/value pair. The value is a C string.|
| value | Start address of the value in an extended key/value pair.|
| value_size | Length of the value in an extended key/value pair.|

**Returns**

Status code enumerated by [OH_AI_Status](#oh_ai_status). The value **OH_AI_STATUS_SUCCESS** indicates that the operation is successful. If the operation fails, an error code is returned.

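A hedged sketch of adding one of the supported extensions ("CachePath") to NNRt device information; the path below is a placeholder, not a documented requirement:

```c
#include <string.h>
#include <mindspore/context.h>
#include <mindspore/status.h>

OH_AI_Status SetCachePath(OH_AI_DeviceInfoHandle nnrt_info) {
  // The value is passed as a start address plus an explicit length.
  const char *value = "/data/storage/cache";  // placeholder path
  return OH_AI_DeviceInfoAddExtension(nnrt_info, "CachePath",
                                      value, strlen(value));
}
```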

### OH_AI_DeviceInfoCreate()

```
OH_AI_API OH_AI_DeviceInfoHandle OH_AI_DeviceInfoCreate (OH_AI_DeviceType device_type)
```

**Description**

Creates a device information object.

**Since**: 9

**Parameters**

| Name| Description|
| -------- | -------- |
| device_type | Device type, which is specified by [OH_AI_DeviceType](#oh_ai_devicetype).|

**Returns**

[OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) that points to a device information instance.


### OH_AI_DeviceInfoDestroy()

```
OH_AI_API void OH_AI_DeviceInfoDestroy (OH_AI_DeviceInfoHandle * device_info)
```

**Description**

Destroys a device information object. Note: After the device information instance is added to the context, the caller does not need to destroy it manually.

**Since**: 9

**Parameters**

| Name| Description|
| -------- | -------- |
| device_info | [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) that points to a device information instance.|


### OH_AI_DeviceInfoGetDeviceId()

```
OH_AI_API size_t OH_AI_DeviceInfoGetDeviceId (const OH_AI_DeviceInfoHandle device_info)
```

**Description**

Obtains the NNRt device ID. This function is available only for NNRt devices.

**Since**: 10

**Parameters**

| Name| Description|
| -------- | -------- |
| device_info | [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) that points to a device information instance.|

**Returns**

NNRt device ID.


### OH_AI_DeviceInfoGetDeviceType()

```
OH_AI_API OH_AI_DeviceType OH_AI_DeviceInfoGetDeviceType (const OH_AI_DeviceInfoHandle device_info)
```

**Description**

Obtains the device type.

**Since**: 9

**Parameters**

| Name| Description|
| -------- | -------- |
| device_info | [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) that points to a device information instance.|

**Returns**

Type of the provider device.


### OH_AI_DeviceInfoGetEnableFP16()

```
OH_AI_API bool OH_AI_DeviceInfoGetEnableFP16 (const OH_AI_DeviceInfoHandle device_info)
```

**Description**

Checks whether float16 inference is enabled. This function is available only for CPU and GPU devices.

**Since**: 9

**Parameters**

| Name| Description|
| -------- | -------- |
| device_info | [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) that points to a device information instance.|

**Returns**

Whether float16 inference is enabled.


### OH_AI_DeviceInfoGetFrequency()

```
OH_AI_API int OH_AI_DeviceInfoGetFrequency (const OH_AI_DeviceInfoHandle device_info)
```

**Description**

Obtains the NPU frequency type. This function is available only for NPU devices.

**Since**: 9

**Parameters**

| Name| Description|
| -------- | -------- |
| device_info | [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) that points to a device information instance.|

**Returns**

NPU frequency type. The value ranges from **0** to **4**. **1**: low power consumption; **2**: balanced; **3**: high performance; **4**: ultra-high performance


### OH_AI_DeviceInfoGetPerformanceMode()

```
OH_AI_API OH_AI_PerformanceMode OH_AI_DeviceInfoGetPerformanceMode (const OH_AI_DeviceInfoHandle device_info)
```

**Description**

Obtains the NNRt performance mode. This function is available only for NNRt devices.

**Since**: 10

**Parameters**

| Name| Description|
| -------- | -------- |
| device_info | [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) that points to a device information instance.|

**Returns**

NNRt performance mode, which is specified by [OH_AI_PerformanceMode](#oh_ai_performancemode).


### OH_AI_DeviceInfoGetPriority()

```
OH_AI_API OH_AI_Priority OH_AI_DeviceInfoGetPriority (const OH_AI_DeviceInfoHandle device_info)
```

**Description**

Obtains the priority of an NNRt task. This function is available only for NNRt devices.

**Since**: 10

**Parameters**

| Name| Description|
| -------- | -------- |
| device_info | [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) that points to a device information instance.|

**Returns**

NNRt task priority, which is specified by [OH_AI_Priority](#oh_ai_priority).


### OH_AI_DeviceInfoGetProvider()

```
OH_AI_API const char* OH_AI_DeviceInfoGetProvider (const OH_AI_DeviceInfoHandle device_info)
```

**Description**

Obtains the provider name.

**Since**: 9

**Parameters**

| Name| Description|
| -------- | -------- |
| device_info | [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) that points to a device information instance.|

**Returns**

Provider name.


### OH_AI_DeviceInfoGetProviderDevice()

```
OH_AI_API const char* OH_AI_DeviceInfoGetProviderDevice (const OH_AI_DeviceInfoHandle device_info)
```

**Description**

Obtains the name of a provider device.

**Since**: 9

**Parameters**

| Name| Description|
| -------- | -------- |
| device_info | [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) that points to a device information instance.|

**Returns**

Name of the provider device.


### OH_AI_DeviceInfoSetDeviceId()

```
OH_AI_API void OH_AI_DeviceInfoSetDeviceId (OH_AI_DeviceInfoHandle device_info, size_t device_id )
```

**Description**

Sets the NNRt device ID. This function is available only for NNRt devices.

**Since**: 10

**Parameters**

| Name| Description|
| -------- | -------- |
| device_info | [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) that points to a device information instance.|
| device_id | NNRt device ID.|


### OH_AI_DeviceInfoSetEnableFP16()

```
OH_AI_API void OH_AI_DeviceInfoSetEnableFP16 (OH_AI_DeviceInfoHandle device_info, bool is_fp16 )
```

**Description**

Sets whether to enable float16 inference. This function is available only for CPU and GPU devices.

**Since**: 9

**Parameters**

| Name| Description|
| -------- | -------- |
| device_info | [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) that points to a device information instance.|
| is_fp16 | Whether to enable the float16 inference mode.|

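A hedged sketch of enabling float16 inference on CPU device information and reading the setting back; it assumes the NDK header `<mindspore/context.h>`:

```c
#include <stdbool.h>
#include <mindspore/context.h>

// Available only for CPU and GPU device information.
void EnableFp16(OH_AI_DeviceInfoHandle cpu_info) {
  OH_AI_DeviceInfoSetEnableFP16(cpu_info, true);

  // Read the setting back to confirm it took effect.
  bool is_fp16 = OH_AI_DeviceInfoGetEnableFP16(cpu_info);
  (void)is_fp16;
}
```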

### OH_AI_DeviceInfoSetFrequency()

```
OH_AI_API void OH_AI_DeviceInfoSetFrequency (OH_AI_DeviceInfoHandle device_info, int frequency )
```

**Description**

Sets the NPU frequency type. This function is available only for NPU devices.

**Since**: 9

**Parameters**

| Name| Description|
| -------- | -------- |
| device_info | [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) that points to a device information instance.|
| frequency | NPU frequency type. The value ranges from **0** to **4**. The default value is **3**. **1**: low power consumption; **2**: balanced; **3**: high performance; **4**: ultra-high performance|


### OH_AI_DeviceInfoSetPerformanceMode()

```
OH_AI_API void OH_AI_DeviceInfoSetPerformanceMode (OH_AI_DeviceInfoHandle device_info, OH_AI_PerformanceMode mode )
```

**Description**

Sets the NNRt performance mode. This function is available only for NNRt devices.

**Since**: 10

**Parameters**

| Name| Description|
| -------- | -------- |
| device_info | [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) that points to a device information instance.|
| mode | NNRt performance mode, which is specified by [OH_AI_PerformanceMode](#oh_ai_performancemode).|


### OH_AI_DeviceInfoSetPriority()

```
OH_AI_API void OH_AI_DeviceInfoSetPriority (OH_AI_DeviceInfoHandle device_info, OH_AI_Priority priority )
```

**Description**

Sets the priority of an NNRt task. This function is available only for NNRt devices.

**Since**: 10

**Parameters**

| Name| Description|
| -------- | -------- |
| device_info | [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) that points to a device information instance.|
| priority | NNRt task priority, which is specified by [OH_AI_Priority](#oh_ai_priority).|

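The two NNRt setters above are typically used together when tuning a device for latency-sensitive workloads; a hedged sketch, assuming the NDK header `<mindspore/context.h>`:

```c
#include <mindspore/context.h>

// Both setters apply only to NNRt device information.
void TuneNNRT(OH_AI_DeviceInfoHandle nnrt_info) {
  OH_AI_DeviceInfoSetPerformanceMode(nnrt_info, OH_AI_PERFORMANCE_HIGH);
  OH_AI_DeviceInfoSetPriority(nnrt_info, OH_AI_PRIORITY_HIGH);
}
```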
1362
1363### OH_AI_DeviceInfoSetProvider()
1364
1365```
1366OH_AI_API void OH_AI_DeviceInfoSetProvider (OH_AI_DeviceInfoHandle device_info, const char * provider )
1367```
1368
1369**Description**
1370
1371Sets the name of the provider.
1372
1373**Since**: 9
1374
1375**Parameters**
1376
1377| Name| Description|
1378| -------- | -------- |
1379| device_info | [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) that points to a device information instance.|
1380| provider | Provider name.|
1381
1382
1383### OH_AI_DeviceInfoSetProviderDevice()
1384
1385```
1386OH_AI_API void OH_AI_DeviceInfoSetProviderDevice (OH_AI_DeviceInfoHandle device_info, const char * device )
1387```
1388
1389**Description**
1390
1391Sets the name of a provider device.
1392
1393**Since**: 9
1394
1395**Parameters**
1396
1397| Name| Description|
1398| -------- | -------- |
1399| device_info | [OH_AI_DeviceInfoHandle](#oh_ai_deviceinfohandle) that points to a device information instance.|
1400| device | Name of the provider device, for example, CPU.|
1401
1402
1403### OH_AI_ExportModel()
1404
1405```
1406OH_AI_API OH_AI_Status OH_AI_ExportModel (OH_AI_ModelHandle model, OH_AI_ModelType model_type, const char * model_file, OH_AI_QuantizationType quantization_type, bool export_inference_only, char ** output_tensor_name, size_t num )
1407```
1408
1409**Description**
1410
1411Exports a training model. This API is used only for on-device training.
1412
1413**Since**: 11
1414
1415**Parameters**
1416
1417| Name| Description|
1418| -------- | -------- |
1419| model | Pointer to the model object.|
1420| model_type | Model file type, which is specified by [OH_AI_ModelType](#oh_ai_modeltype).|
1421| model_file | Path of the exported model file.|
1422| quantization_type | Quantization type.|
1423| export_inference_only | Whether to export an inference model.|
1424| output_tensor_name | Output tensor of the exported model. This parameter is left blank by default, which indicates full export.|
1425| num | Number of output tensors.|
1426
1427**Returns**
1428
1429Status code enumerated by [OH_AI_Status](#oh_ai_status). The value **OH_AI_STATUS_SUCCESS** indicates that the operation is successful. If the operation fails, an error code is returned.

### OH_AI_ExportModelBuffer()

```
OH_AI_API OH_AI_Status OH_AI_ExportModelBuffer (OH_AI_ModelHandle model, OH_AI_ModelType model_type, void * model_data, size_t * data_size, OH_AI_QuantizationType quantization_type, bool export_inference_only, char ** output_tensor_name, size_t num )
```

**Description**

Exports the training model to a memory buffer. This API is used only for on-device training.

**Since**: 11

**Parameters**

| Name| Description|
| -------- | -------- |
| model | Pointer to the model object.|
| model_type | Model file type, which is specified by [OH_AI_ModelType](#oh_ai_modeltype).|
| model_data | Pointer to the buffer that stores the exported model file.|
| data_size | Buffer size.|
| quantization_type | Quantization type.|
| export_inference_only | Whether to export an inference-only model.|
| output_tensor_name | Output tensors of the exported model. If this parameter is left empty (the default), the full model is exported.|
| num | Number of output tensors.|

**Returns**

Status code enumerated by [OH_AI_Status](#oh_ai_status). The value **OH_AI_STATUS_SUCCESS** indicates that the operation is successful. If the operation fails, an error code is returned.


### OH_AI_ExportWeightsCollaborateWithMicro()

```
OH_AI_API OH_AI_Status OH_AI_ExportWeightsCollaborateWithMicro (OH_AI_ModelHandle model, OH_AI_ModelType model_type, const char * weight_file, bool is_inference, bool enable_fp16, char ** changeable_weights_name, size_t num )
```

**Description**

Exports the weight file of the training model for micro inference. This API is used only for on-device training.

**Since**: 11

**Parameters**

| Name| Description|
| -------- | -------- |
| model | Pointer to the model object.|
| model_type | Model file type, which is specified by [OH_AI_ModelType](#oh_ai_modeltype).|
| weight_file | Path of the exported weight file.|
| is_inference | Whether to export an inference model. Currently, this parameter can only be set to **true**.|
| enable_fp16 | Whether to save floating-point weights in float16 format.|
| changeable_weights_name | Names of the weight tensors with variable shapes.|
| num | Number of weight tensors with variable shapes.|

**Returns**

Status code enumerated by [OH_AI_Status](#oh_ai_status). The value **OH_AI_STATUS_SUCCESS** indicates that the operation is successful. If the operation fails, an error code is returned.


### OH_AI_GetAllNNRTDeviceDescs()

```
OH_AI_API NNRTDeviceDesc* OH_AI_GetAllNNRTDeviceDescs (size_t * num)
```

**Description**

Obtains the descriptions of all NNRt devices in the system.

**Since**: 10

**Parameters**

| Name| Description|
| -------- | -------- |
| num | Number of NNRt devices.|

**Returns**

Pointer to the array of the NNRt device descriptions. If the operation fails, **NULL** is returned.


### OH_AI_GetDeviceIdFromNNRTDeviceDesc()

```
OH_AI_API size_t OH_AI_GetDeviceIdFromNNRTDeviceDesc (const NNRTDeviceDesc * desc)
```

**Description**

Obtains the NNRt device ID from the specified NNRt device description. Note that this ID is valid only for NNRt devices.

**Since**: 10

**Parameters**

| Name| Description|
| -------- | -------- |
| desc | Pointer to the NNRt device description.|

**Returns**

NNRt device ID.


### OH_AI_GetElementOfNNRTDeviceDescs()

```
OH_AI_API NNRTDeviceDesc* OH_AI_GetElementOfNNRTDeviceDescs (NNRTDeviceDesc * descs, size_t index )
```

**Description**

Obtains the pointer to an element in the NNRt device description array.

**Since**: 10

**Parameters**

| Name| Description|
| -------- | -------- |
| descs | NNRt device description array.|
| index | Index of an array element.|

**Returns**

Pointer to an element in the NNRt device description array.


### OH_AI_GetNameFromNNRTDeviceDesc()

```
OH_AI_API const char* OH_AI_GetNameFromNNRTDeviceDesc (const NNRTDeviceDesc * desc)
```

**Description**

Obtains the NNRt device name from the specified NNRt device description.

**Since**: 10

**Parameters**

| Name| Description|
| -------- | -------- |
| desc | Pointer to the NNRt device description.|

**Returns**

NNRt device name. The value is a pointer to a constant string held by **desc**; the caller does not need to free it.


### OH_AI_GetTypeFromNNRTDeviceDesc()

```
OH_AI_API OH_AI_NNRTDeviceType OH_AI_GetTypeFromNNRTDeviceDesc (const NNRTDeviceDesc * desc)
```

**Description**

Obtains the NNRt device type from the specified NNRt device description.

**Since**: 10

**Parameters**

| Name| Description|
| -------- | -------- |
| desc | Pointer to the NNRt device description.|

**Returns**

NNRt device type, which is specified by [OH_AI_NNRTDeviceType](#oh_ai_nnrtdevicetype).
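
The four query functions above are typically used together to enumerate the available NNRt devices. The sketch below assumes the companion **OH_AI_DestroyAllNNRTDeviceDescs** API from **context.h** (not covered in this section) is used to release the array:

```c
#include <stdio.h>
#include <mindspore/context.h>
#include <mindspore/types.h>

void list_nnrt_devices(void) {
    size_t num = 0;
    NNRTDeviceDesc *descs = OH_AI_GetAllNNRTDeviceDescs(&num);
    if (descs == NULL) {
        printf("No NNRt device found.\n");
        return;
    }
    for (size_t i = 0; i < num; ++i) {
        /* Obtain each element by index rather than by pointer arithmetic,
         * because NNRTDeviceDesc is an opaque type. */
        NNRTDeviceDesc *desc = OH_AI_GetElementOfNNRTDeviceDescs(descs, i);
        printf("device %zu: id=%zu name=%s type=%d\n", i,
               OH_AI_GetDeviceIdFromNNRTDeviceDesc(desc),
               OH_AI_GetNameFromNNRTDeviceDesc(desc),        /* string owned by desc */
               (int)OH_AI_GetTypeFromNNRTDeviceDesc(desc));
    }
    /* Assumed cleanup API; verify against your context.h. */
    OH_AI_DestroyAllNNRTDeviceDescs(&descs);
}
```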


### OH_AI_ModelBuild()

```
OH_AI_API OH_AI_Status OH_AI_ModelBuild (OH_AI_ModelHandle model, const void * model_data, size_t data_size, OH_AI_ModelType model_type, const OH_AI_ContextHandle model_context )
```

**Description**

Loads and builds a MindSpore model from a memory buffer.

Note that the same [OH_AI_ContextHandle](#oh_ai_contexthandle) object can be passed to **OH_AI_ModelBuild** or **OH_AI_ModelBuildFromFile** only once. To call either function multiple times, create a separate [OH_AI_ContextHandle](#oh_ai_contexthandle) object for each call.

**Since**: 9

**Parameters**

| Name| Description|
| -------- | -------- |
| model | Pointer to the model object.|
| model_data | Address of the loaded model data in the memory.|
| data_size | Length of the model data.|
| model_type | Model file type, which is specified by [OH_AI_ModelType](#oh_ai_modeltype).|
| model_context | Model runtime context, which is specified by [OH_AI_ContextHandle](#oh_ai_contexthandle).|

**Returns**

Status code enumerated by [OH_AI_Status](#oh_ai_status). The value **OH_AI_STATUS_SUCCESS** indicates that the operation is successful. If the operation fails, an error code is returned.


### OH_AI_ModelBuildFromFile()

```
OH_AI_API OH_AI_Status OH_AI_ModelBuildFromFile (OH_AI_ModelHandle model, const char * model_path, OH_AI_ModelType model_type, const OH_AI_ContextHandle model_context )
```

**Description**

Loads and builds a MindSpore model from a model file.

Note that the same [OH_AI_ContextHandle](#oh_ai_contexthandle) object can be passed to **OH_AI_ModelBuild** or **OH_AI_ModelBuildFromFile** only once. To call either function multiple times, create a separate [OH_AI_ContextHandle](#oh_ai_contexthandle) object for each call.

**Since**: 9

**Parameters**

| Name| Description|
| -------- | -------- |
| model | Pointer to the model object.|
| model_path | Path of the model file.|
| model_type | Model file type, which is specified by [OH_AI_ModelType](#oh_ai_modeltype).|
| model_context | Model runtime context, which is specified by [OH_AI_ContextHandle](#oh_ai_contexthandle).|

**Returns**

Status code enumerated by [OH_AI_Status](#oh_ai_status). The value **OH_AI_STATUS_SUCCESS** indicates that the operation is successful. If the operation fails, an error code is returned.
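
A typical build sequence creates a context, attaches a device, and then builds the model. This sketch assumes the **OH_AI_ContextCreate**, **OH_AI_DeviceInfoCreate**, and **OH_AI_ContextAddDeviceInfo** APIs declared in **context.h** (documented outside this section):

```c
#include <stdio.h>
#include <stddef.h>
#include <mindspore/context.h>
#include <mindspore/model.h>
#include <mindspore/status.h>
#include <mindspore/types.h>

OH_AI_ModelHandle build_model(const char *model_path) {
    /* One context per build call; do not reuse it for another build. */
    OH_AI_ContextHandle context = OH_AI_ContextCreate();
    OH_AI_DeviceInfoHandle cpu_info = OH_AI_DeviceInfoCreate(OH_AI_DEVICETYPE_CPU);
    OH_AI_ContextAddDeviceInfo(context, cpu_info);

    OH_AI_ModelHandle model = OH_AI_ModelCreate();
    OH_AI_Status ret = OH_AI_ModelBuildFromFile(model, model_path,
                                                OH_AI_MODELTYPE_MINDIR, context);
    if (ret != OH_AI_STATUS_SUCCESS) {
        printf("Build failed, status: %d\n", (int)ret);
        OH_AI_ModelDestroy(&model);
        return NULL;
    }
    return model;
}
```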


### OH_AI_ModelCreate()

```
OH_AI_API OH_AI_ModelHandle OH_AI_ModelCreate (void)
```

**Description**

Creates a model object.

**Since**: 9

**Returns**

Pointer to the model object.


### OH_AI_ModelDestroy()

```
OH_AI_API void OH_AI_ModelDestroy (OH_AI_ModelHandle * model)
```

**Description**

Destroys a model object.

**Since**: 9

**Parameters**

| Name| Description|
| -------- | -------- |
| model | Pointer to the model object.|


### OH_AI_ModelGetInputByTensorName()

```
OH_AI_API OH_AI_TensorHandle OH_AI_ModelGetInputByTensorName (const OH_AI_ModelHandle model, const char * tensor_name )
```

**Description**

Obtains the input tensor of a model by tensor name.

**Since**: 9

**Parameters**

| Name| Description|
| -------- | -------- |
| model | Pointer to the model object.|
| tensor_name | Tensor name.|

**Returns**

Pointer to the input tensor indicated by **tensor_name**. If the tensor does not exist in the input, **NULL** will be returned.


### OH_AI_ModelGetInputs()

```
OH_AI_API OH_AI_TensorHandleArray OH_AI_ModelGetInputs (const OH_AI_ModelHandle model)
```

**Description**

Obtains the input tensor array structure of a model.

**Since**: 9

**Parameters**

| Name| Description|
| -------- | -------- |
| model | Pointer to the model object.|

**Returns**

Tensor array structure corresponding to the model input.


### OH_AI_ModelGetLearningRate()

```
OH_AI_API float OH_AI_ModelGetLearningRate (OH_AI_ModelHandle model)
```

**Description**

Obtains the learning rate for model training. This API is used only for on-device training.

**Since**: 11

**Parameters**

| Name| Description|
| -------- | -------- |
| model | Pointer to the model object.|

**Returns**

Learning rate. If no optimizer is set, the value is **0.0**.


### OH_AI_ModelGetOutputByTensorName()

```
OH_AI_API OH_AI_TensorHandle OH_AI_ModelGetOutputByTensorName (const OH_AI_ModelHandle model, const char * tensor_name )
```

**Description**

Obtains the output tensor of a model by tensor name.

**Since**: 9

**Parameters**

| Name| Description|
| -------- | -------- |
| model | Pointer to the model object.|
| tensor_name | Tensor name.|

**Returns**

Pointer to the output tensor indicated by **tensor_name**. If the tensor does not exist in the output, **NULL** will be returned.


### OH_AI_ModelGetOutputs()

```
OH_AI_API OH_AI_TensorHandleArray OH_AI_ModelGetOutputs (const OH_AI_ModelHandle model)
```

**Description**

Obtains the output tensor array structure of a model.

**Since**: 9

**Parameters**

| Name| Description|
| -------- | -------- |
| model | Pointer to the model object.|

**Returns**

Tensor array structure corresponding to the model output.


### OH_AI_ModelGetTrainMode()

```
OH_AI_API bool OH_AI_ModelGetTrainMode (OH_AI_ModelHandle model)
```

**Description**

Obtains the training mode.

**Since**: 11

**Parameters**

| Name| Description|
| -------- | -------- |
| model | Pointer to the model object.|

**Returns**

Whether the training mode is used.


### OH_AI_ModelGetWeights()

```
OH_AI_API OH_AI_TensorHandleArray OH_AI_ModelGetWeights (OH_AI_ModelHandle model)
```

**Description**

Obtains all weight tensors of a model. This API is used only for on-device training.

**Since**: 11

**Parameters**

| Name| Description|
| -------- | -------- |
| model | Pointer to the model object.|

**Returns**

All weight tensors of the model.


### OH_AI_ModelPredict()

```
OH_AI_API OH_AI_Status OH_AI_ModelPredict (OH_AI_ModelHandle model, const OH_AI_TensorHandleArray inputs, OH_AI_TensorHandleArray * outputs, const OH_AI_KernelCallBack before, const OH_AI_KernelCallBack after )
```

**Description**

Performs model inference.

**Since**: 9

**Parameters**

| Name| Description|
| -------- | -------- |
| model | Pointer to the model object.|
| inputs | Tensor array structure corresponding to the model input.|
| outputs | Pointer to the tensor array structure corresponding to the model output.|
| before | Callback function executed before model inference.|
| after | Callback function executed after model inference.|

**Returns**

Status code enumerated by [OH_AI_Status](#oh_ai_status). The value **OH_AI_STATUS_SUCCESS** indicates that the operation is successful. If the operation fails, an error code is returned.
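
Together with the tensor accessors below, a minimal inference pass fills the input tensors, runs **OH_AI_ModelPredict** without callbacks, and reads the outputs. The **handle_list** and **handle_num** fields come from the [OH_AI_TensorHandleArray](_o_h___a_i___tensor_handle_array.md) structure; a float32 model with a single input is assumed:

```c
#include <stdio.h>
#include <string.h>
#include <mindspore/model.h>
#include <mindspore/tensor.h>
#include <mindspore/status.h>

int run_inference(OH_AI_ModelHandle model, const float *input_data, size_t input_bytes) {
    OH_AI_TensorHandleArray inputs = OH_AI_ModelGetInputs(model);
    /* Copy the user data into the first input tensor; GetMutableData
     * allocates the tensor memory if it is still empty. */
    float *in = (float *)OH_AI_TensorGetMutableData(inputs.handle_list[0]);
    memcpy(in, input_data, input_bytes);

    OH_AI_TensorHandleArray outputs;
    if (OH_AI_ModelPredict(model, inputs, &outputs, NULL, NULL) != OH_AI_STATUS_SUCCESS) {
        return -1;
    }

    const float *out = (const float *)OH_AI_TensorGetData(outputs.handle_list[0]);
    int64_t n = OH_AI_TensorGetElementNum(outputs.handle_list[0]);
    for (int64_t i = 0; i < n; ++i) {
        printf("%f\n", out[i]);
    }
    return 0;
}
```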


### OH_AI_ModelResize()

```
OH_AI_API OH_AI_Status OH_AI_ModelResize (OH_AI_ModelHandle model, const OH_AI_TensorHandleArray inputs, OH_AI_ShapeInfo * shape_infos, size_t shape_info_num )
```

**Description**

Adjusts the input tensor shapes of a built model.

**Since**: 9

**Parameters**

| Name| Description|
| -------- | -------- |
| model | Pointer to the model object.|
| inputs | Tensor array structure corresponding to the model input.|
| shape_infos | Input shape information array, which consists of tensor shapes arranged in the model input sequence. The model adjusts the tensor shapes in sequence.|
| shape_info_num | Length of the shape information array.|

**Returns**

Status code enumerated by [OH_AI_Status](#oh_ai_status). The value **OH_AI_STATUS_SUCCESS** indicates that the operation is successful. If the operation fails, an error code is returned.
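
The **shape** and **shape_num** fields used below come from the [OH_AI_ShapeInfo](_o_h___a_i___shape_info.md) structure. As a sketch, this resizes the batch dimension of a model whose single input is assumed to be shaped `[1, 224, 224, 3]`:

```c
#include <mindspore/model.h>
#include <mindspore/status.h>

/* Resize the first model input to batch size 4. The original input shape
 * [1, 224, 224, 3] is an assumption for illustration. */
OH_AI_Status resize_batch(OH_AI_ModelHandle model) {
    OH_AI_TensorHandleArray inputs = OH_AI_ModelGetInputs(model);
    OH_AI_ShapeInfo shape_info;
    shape_info.shape_num = 4;
    shape_info.shape[0] = 4;    /* new batch size */
    shape_info.shape[1] = 224;
    shape_info.shape[2] = 224;
    shape_info.shape[3] = 3;
    /* One OH_AI_ShapeInfo entry per model input, in input order. */
    return OH_AI_ModelResize(model, inputs, &shape_info, 1);
}
```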


### OH_AI_ModelSetLearningRate()

```
OH_AI_API OH_AI_Status OH_AI_ModelSetLearningRate (OH_AI_ModelHandle model, float learning_rate )
```

**Description**

Sets the learning rate for model training. This API is used only for on-device training.

**Since**: 11

**Parameters**

| Name| Description|
| -------- | -------- |
| model | Pointer to the model object.|
| learning_rate | Learning rate.|

**Returns**

Status code enumerated by [OH_AI_Status](#oh_ai_status). The value **OH_AI_STATUS_SUCCESS** indicates that the operation is successful. If the operation fails, an error code is returned.


### OH_AI_ModelSetTrainMode()

```
OH_AI_API OH_AI_Status OH_AI_ModelSetTrainMode (OH_AI_ModelHandle model, bool train )
```

**Description**

Sets the training mode. This API is used only for on-device training.

**Since**: 11

**Parameters**

| Name| Description|
| -------- | -------- |
| model | Pointer to the model object.|
| train | Whether the training mode is used.|

**Returns**

Status code enumerated by [OH_AI_Status](#oh_ai_status). The value **OH_AI_STATUS_SUCCESS** indicates that the operation is successful. If the operation fails, an error code is returned.


### OH_AI_ModelSetupVirtualBatch()

```
OH_AI_API OH_AI_Status OH_AI_ModelSetupVirtualBatch (OH_AI_ModelHandle model, int virtual_batch_multiplier, float lr, float momentum )
```

**Description**

Sets the virtual batch for training. This API is used only for on-device training.

**Since**: 11

**Parameters**

| Name| Description|
| -------- | -------- |
| model | Pointer to the model object.|
| virtual_batch_multiplier | Virtual batch multiplier. If the value is less than **1**, the virtual batch is disabled.|
| lr | Learning rate. The default value is **-1.0f**.|
| momentum | Momentum. The default value is **-1.0f**.|

**Returns**

Status code enumerated by [OH_AI_Status](#oh_ai_status). The value **OH_AI_STATUS_SUCCESS** indicates that the operation is successful. If the operation fails, an error code is returned.


### OH_AI_ModelUpdateWeights()

```
OH_AI_API OH_AI_Status OH_AI_ModelUpdateWeights (OH_AI_ModelHandle model, const OH_AI_TensorHandleArray new_weights )
```

**Description**

Updates the weight tensors of a model. This API is used only for on-device training.

**Since**: 11

**Parameters**

| Name| Description|
| -------- | -------- |
| model | Pointer to the model object.|
| new_weights | Weight tensors to be updated.|

**Returns**

Status code enumerated by [OH_AI_Status](#oh_ai_status). The value **OH_AI_STATUS_SUCCESS** indicates that the operation is successful. If the operation fails, an error code is returned.


### OH_AI_RunStep()

```
OH_AI_API OH_AI_Status OH_AI_RunStep (OH_AI_ModelHandle model, const OH_AI_KernelCallBack before, const OH_AI_KernelCallBack after )
```

**Description**

Runs a single training step. This API is used only for on-device training.

**Since**: 11

**Parameters**

| Name| Description|
| -------- | -------- |
| model | Pointer to the model object.|
| before | Callback function executed before model inference.|
| after | Callback function executed after model inference.|

**Returns**

Status code enumerated by [OH_AI_Status](#oh_ai_status). The value **OH_AI_STATUS_SUCCESS** indicates that the operation is successful. If the operation fails, an error code is returned.


### OH_AI_TensorClone()

```
OH_AI_API OH_AI_TensorHandle OH_AI_TensorClone (OH_AI_TensorHandle tensor)
```

**Description**

Clones a tensor.

**Since**: 9

**Parameters**

| Name| Description|
| -------- | -------- |
| tensor | Pointer to the tensor to clone.|

**Returns**

Handle of a tensor object.


### OH_AI_TensorCreate()

```
OH_AI_API OH_AI_TensorHandle OH_AI_TensorCreate (const char * name, OH_AI_DataType type, const int64_t * shape, size_t shape_num, const void * data, size_t data_len )
```

**Description**

Creates a tensor object.

**Since**: 9

**Parameters**

| Name| Description|
| -------- | -------- |
| name | Tensor name.|
| type | Tensor data type.|
| shape | Tensor dimension array.|
| shape_num | Length of the tensor dimension array.|
| data | Data pointer.|
| data_len | Data length.|

**Returns**

Handle of a tensor object.
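
As a sketch of the tensor lifecycle, the snippet below creates a float32 tensor, clones it, and destroys both handles. The data type constant **OH_AI_DATATYPE_NUMBERTYPE_FLOAT32** is assumed from **data_type.h**; verify ownership semantics of the `data` buffer against **tensor.h**:

```c
#include <stdint.h>
#include <mindspore/tensor.h>
#include <mindspore/data_type.h>

void tensor_lifecycle(void) {
    int64_t shape[] = {1, 3};
    float data[] = {1.0f, 2.0f, 3.0f};
    OH_AI_TensorHandle t = OH_AI_TensorCreate("input0",
                                              OH_AI_DATATYPE_NUMBERTYPE_FLOAT32,
                                              shape, 2, data, sizeof(data));
    /* Clone produces an independent tensor object. */
    OH_AI_TensorHandle copy = OH_AI_TensorClone(t);
    /* Both handles must be destroyed separately. */
    OH_AI_TensorDestroy(&t);
    OH_AI_TensorDestroy(&copy);
}
```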

### OH_AI_TensorGetAllocator()

```
OH_AI_API OH_AI_AllocatorHandle OH_AI_TensorGetAllocator(OH_AI_TensorHandle tensor)
```

**Description**

Obtains a memory allocator. The allocator is responsible for allocating memory for tensors.

**Since**: 12

**Parameters**

| Name| Description|
| -------- | -------- |
| tensor | Handle of the tensor object.|

**Returns**

Handle of the memory allocator.


### OH_AI_TensorDestroy()

```
OH_AI_API void OH_AI_TensorDestroy (OH_AI_TensorHandle * tensor)
```

**Description**

Destroys a tensor object.

**Since**: 9

**Parameters**

| Name| Description|
| -------- | -------- |
| tensor | Double pointer to the tensor handle.|


### OH_AI_TensorGetData()

```
OH_AI_API const void* OH_AI_TensorGetData (const OH_AI_TensorHandle tensor)
```

**Description**

Obtains the pointer to tensor data.

**Since**: 9

**Parameters**

| Name| Description|
| -------- | -------- |
| tensor | Handle of the tensor object.|

**Returns**

Pointer to tensor data.


### OH_AI_TensorGetDataSize()

```
OH_AI_API size_t OH_AI_TensorGetDataSize (const OH_AI_TensorHandle tensor)
```

**Description**

Obtains the number of bytes of the tensor data.

**Since**: 9

**Parameters**

| Name| Description|
| -------- | -------- |
| tensor | Handle of the tensor object.|

**Returns**

Number of bytes of the tensor data.


### OH_AI_TensorGetDataType()

```
OH_AI_API OH_AI_DataType OH_AI_TensorGetDataType (const OH_AI_TensorHandle tensor)
```

**Description**

Obtains the tensor data type.

**Since**: 9

**Parameters**

| Name| Description|
| -------- | -------- |
| tensor | Handle of the tensor object.|

**Returns**

Tensor data type.


### OH_AI_TensorGetElementNum()

```
OH_AI_API int64_t OH_AI_TensorGetElementNum (const OH_AI_TensorHandle tensor)
```

**Description**

Obtains the number of tensor elements.

**Since**: 9

**Parameters**

| Name| Description|
| -------- | -------- |
| tensor | Handle of the tensor object.|

**Returns**

Number of tensor elements.


### OH_AI_TensorGetFormat()

```
OH_AI_API OH_AI_Format OH_AI_TensorGetFormat (const OH_AI_TensorHandle tensor)
```

**Description**

Obtains the tensor data format.

**Since**: 9

**Parameters**

| Name| Description|
| -------- | -------- |
| tensor | Handle of the tensor object.|

**Returns**

Tensor data format.


### OH_AI_TensorGetMutableData()

```
OH_AI_API void* OH_AI_TensorGetMutableData (const OH_AI_TensorHandle tensor)
```

**Description**

Obtains the pointer to mutable tensor data. If the tensor holds no data, memory will be allocated.

**Since**: 9

**Parameters**

| Name| Description|
| -------- | -------- |
| tensor | Handle of the tensor object.|

**Returns**

Pointer to tensor data.


### OH_AI_TensorGetName()

```
OH_AI_API const char* OH_AI_TensorGetName (const OH_AI_TensorHandle tensor)
```

**Description**

Obtains the tensor name.

**Since**: 9

**Parameters**

| Name| Description|
| -------- | -------- |
| tensor | Handle of the tensor object.|

**Returns**

Tensor name.


### OH_AI_TensorGetShape()

```
OH_AI_API const int64_t* OH_AI_TensorGetShape (const OH_AI_TensorHandle tensor, size_t * shape_num )
```

**Description**

Obtains the tensor shape.

**Since**: 9

**Parameters**

| Name| Description|
| -------- | -------- |
| tensor | Handle of the tensor object.|
| shape_num | Length of the tensor shape array.|

**Returns**

Shape array.
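
The getters above can be combined into a small inspection helper, for example to debug model inputs and outputs. All calls below are documented in this section:

```c
#include <stdio.h>
#include <stdint.h>
#include <mindspore/tensor.h>

void print_tensor_info(OH_AI_TensorHandle tensor) {
    size_t shape_num = 0;
    const int64_t *shape = OH_AI_TensorGetShape(tensor, &shape_num);
    printf("name=%s type=%d bytes=%zu elements=%lld shape=[",
           OH_AI_TensorGetName(tensor),
           (int)OH_AI_TensorGetDataType(tensor),
           OH_AI_TensorGetDataSize(tensor),
           (long long)OH_AI_TensorGetElementNum(tensor));
    for (size_t i = 0; i < shape_num; ++i) {
        printf(i == 0 ? "%lld" : ", %lld", (long long)shape[i]);
    }
    printf("]\n");
}
```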

### OH_AI_TensorSetAllocator()

```
OH_AI_API OH_AI_Status OH_AI_TensorSetAllocator(OH_AI_TensorHandle tensor, OH_AI_AllocatorHandle allocator)
```

**Description**

Sets the memory allocator. The allocator is responsible for allocating memory for tensors.

**Since**: 12

**Parameters**

| Name| Description|
| -------- | -------- |
| tensor | Handle of the tensor object.|
| allocator | Handle of the memory allocator.|

**Returns**

Execution status code. The value **OH_AI_STATUS_SUCCESS** indicates that the operation is successful. If the operation fails, an error code is returned.


### OH_AI_TensorSetData()

```
OH_AI_API void OH_AI_TensorSetData (OH_AI_TensorHandle tensor, void * data )
```

**Description**

Sets the tensor data.

**Since**: 9

**Parameters**

| Name| Description|
| -------- | -------- |
| tensor | Handle of the tensor object.|
| data | Data pointer.|


### OH_AI_TensorSetDataType()

```
OH_AI_API void OH_AI_TensorSetDataType (OH_AI_TensorHandle tensor, OH_AI_DataType type )
```

**Description**

Sets the data type of a tensor.

**Since**: 9

**Parameters**

| Name| Description|
| -------- | -------- |
| tensor | Handle of the tensor object.|
| type | Data type, which is specified by [OH_AI_DataType](#oh_ai_datatype).|


### OH_AI_TensorSetFormat()

```
OH_AI_API void OH_AI_TensorSetFormat (OH_AI_TensorHandle tensor, OH_AI_Format format )
```

**Description**

Sets the tensor data format.

**Since**: 9

**Parameters**

| Name| Description|
| -------- | -------- |
| tensor | Handle of the tensor object.|
| format | Tensor data format.|


### OH_AI_TensorSetName()

```
OH_AI_API void OH_AI_TensorSetName (OH_AI_TensorHandle tensor, const char * name )
```

**Description**

Sets the tensor name.

**Since**: 9

**Parameters**

| Name| Description|
| -------- | -------- |
| tensor | Handle of the tensor object.|
| name | Tensor name.|


### OH_AI_TensorSetShape()

```
OH_AI_API void OH_AI_TensorSetShape (OH_AI_TensorHandle tensor, const int64_t * shape, size_t shape_num )
```

**Description**

Sets the tensor shape.

**Since**: 9

**Parameters**

| Name| Description|
| -------- | -------- |
| tensor | Handle of the tensor object.|
| shape | Shape array.|
| shape_num | Length of the tensor shape array.|


### OH_AI_TensorSetUserData()

```
OH_AI_API OH_AI_Status OH_AI_TensorSetUserData (OH_AI_TensorHandle tensor, void * data, size_t data_size )
```

**Description**

Sets the tensor as user data. This function allows you to reuse user data as the model input, which saves one data copy.

> **NOTE**<br>The user data is external data for the tensor and is not automatically released when the tensor is destroyed. The caller must release it separately and ensure that it remains valid for as long as the tensor is in use.

**Since**: 10

**Parameters**

| Name| Description|
| -------- | -------- |
| tensor | Handle of the tensor object.|
| data | Start address of user data.|
| data_size | Length of the user data.|

**Returns**

Execution status code. The value **OH_AI_STATUS_SUCCESS** indicates that the operation is successful. If the operation fails, an error code is returned.
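
Because ownership stays with the caller, a typical use is to bind a caller-owned buffer (for example, a camera frame) to an input tensor before inference. A minimal sketch:

```c
#include <stddef.h>
#include <mindspore/tensor.h>
#include <mindspore/status.h>

/* `frame` is a caller-owned buffer that must remain valid for as long as
 * the tensor is in use; the tensor does not take ownership. */
OH_AI_Status bind_user_buffer(OH_AI_TensorHandle tensor, void *frame, size_t frame_size) {
    OH_AI_Status ret = OH_AI_TensorSetUserData(tensor, frame, frame_size);
    /* Free `frame` yourself, and only after the tensor is no longer used. */
    return ret;
}
```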


### OH_AI_TrainCfgCreate()

```
OH_AI_API OH_AI_TrainCfgHandle OH_AI_TrainCfgCreate ()
```

**Description**

Creates the pointer to the training configuration object. This API is used only for on-device training.

**Since**: 11

**Returns**

Pointer to the training configuration object.


### OH_AI_TrainCfgDestroy()

```
OH_AI_API void OH_AI_TrainCfgDestroy (OH_AI_TrainCfgHandle * train_cfg)
```

**Description**

Destroys the pointer to the training configuration object. This API is used only for on-device training.

**Since**: 11

**Parameters**

| Name| Description|
| -------- | -------- |
| train_cfg | Pointer to the training configuration object.|


### OH_AI_TrainCfgGetLossName()

```
OH_AI_API char** OH_AI_TrainCfgGetLossName (OH_AI_TrainCfgHandle train_cfg, size_t * num )
```

**Description**

Obtains the list of loss functions, which are used only for on-device training.

**Since**: 11

**Parameters**

| Name| Description|
| -------- | -------- |
| train_cfg | Pointer to the training configuration object.|
| num | Number of loss functions.|

**Returns**

List of loss functions.


### OH_AI_TrainCfgGetOptimizationLevel()

```
OH_AI_API OH_AI_OptimizationLevel OH_AI_TrainCfgGetOptimizationLevel (OH_AI_TrainCfgHandle train_cfg)
```

**Description**

Obtains the optimization level of the training configuration object. This API is used only for on-device training.

**Since**: 11

**Parameters**

| Name| Description|
| -------- | -------- |
| train_cfg | Pointer to the training configuration object.|

**Returns**

Optimization level.


### OH_AI_TrainCfgSetLossName()

```
OH_AI_API void OH_AI_TrainCfgSetLossName (OH_AI_TrainCfgHandle train_cfg, const char ** loss_name, size_t num )
```

**Description**

Sets the list of loss functions, which are used only for on-device training.

**Since**: 11

**Parameters**

| Name| Description|
| -------- | -------- |
| train_cfg | Pointer to the training configuration object.|
| loss_name | List of loss functions.|
| num | Number of loss functions.|


### OH_AI_TrainCfgSetOptimizationLevel()

```
OH_AI_API void OH_AI_TrainCfgSetOptimizationLevel (OH_AI_TrainCfgHandle train_cfg, OH_AI_OptimizationLevel level )
```

**Description**

Sets the optimization level of the training configuration object. This API is used only for on-device training.

**Since**: 11

**Parameters**

| Name| Description|
| -------- | -------- |
| train_cfg | Pointer to the training configuration object.|
| level | Optimization level.|


### OH_AI_TrainModelBuild()

```
OH_AI_API OH_AI_Status OH_AI_TrainModelBuild (OH_AI_ModelHandle model, const void * model_data, size_t data_size, OH_AI_ModelType model_type, const OH_AI_ContextHandle model_context, const OH_AI_TrainCfgHandle train_cfg )
```

**Description**

Loads a training model from the memory buffer and compiles the model to a state ready for running on the device. This API is used only for on-device training.

**Since**: 11

**Parameters**

| Name| Description|
| -------- | -------- |
| model | Pointer to the model object.|
| model_data | Pointer to the buffer for storing the model file to be read.|
| data_size | Buffer size.|
| model_type | Model file type, which is specified by [OH_AI_ModelType](#oh_ai_modeltype).|
| model_context | Model runtime context, which is specified by [OH_AI_ContextHandle](#oh_ai_contexthandle).|
| train_cfg | Pointer to the training configuration object.|

**Returns**

Status code enumerated by [OH_AI_Status](#oh_ai_status). The value **OH_AI_STATUS_SUCCESS** indicates that the operation is successful. If the operation fails, an error code is returned.


### OH_AI_TrainModelBuildFromFile()

```
OH_AI_API OH_AI_Status OH_AI_TrainModelBuildFromFile (OH_AI_ModelHandle model, const char * model_path, OH_AI_ModelType model_type, const OH_AI_ContextHandle model_context, const OH_AI_TrainCfgHandle train_cfg )
```

**Description**

Loads the training model from the specified path and compiles the model to a state ready for running on the device. This API is used only for on-device training.

**Since**: 11

**Parameters**

| Name| Description|
| -------- | -------- |
| model | Pointer to the model object.|
| model_path | Path of the model file.|
| model_type | Model file type, which is specified by [OH_AI_ModelType](#oh_ai_modeltype).|
| model_context | Model runtime context, which is specified by [OH_AI_ContextHandle](#oh_ai_contexthandle).|
| train_cfg | Pointer to the training configuration object.|

**Returns**

Status code enumerated by [OH_AI_Status](#oh_ai_status). The value **OH_AI_STATUS_SUCCESS** indicates that the operation is successful. If the operation fails, an error code is returned.
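
The on-device training APIs above fit together in a build → train → export loop. The sketch below simplifies epoch and data handling; `context` is assumed to be a freshly created **OH_AI_ContextHandle** with a CPU device attached, and the **OH_AI_NO_QUANT** value of [OH_AI_QuantizationType] is assumed from **types.h**:

```c
#include <stdbool.h>
#include <mindspore/model.h>
#include <mindspore/status.h>
#include <mindspore/types.h>

int train_and_export(const char *train_model_path, const char *export_path,
                     OH_AI_ContextHandle context) {
    OH_AI_TrainCfgHandle cfg = OH_AI_TrainCfgCreate();
    OH_AI_ModelHandle model = OH_AI_ModelCreate();
    if (OH_AI_TrainModelBuildFromFile(model, train_model_path, OH_AI_MODELTYPE_MINDIR,
                                      context, cfg) != OH_AI_STATUS_SUCCESS) {
        return -1;
    }
    OH_AI_ModelSetTrainMode(model, true);
    OH_AI_ModelSetLearningRate(model, 0.01f);

    for (int step = 0; step < 100; ++step) {
        /* Fill the input tensors with a training batch here, then: */
        if (OH_AI_RunStep(model, NULL, NULL) != OH_AI_STATUS_SUCCESS) {
            return -1;
        }
    }
    /* Export an inference-only model with no quantization; passing NULL
     * and 0 for the output tensor list exports the full model. */
    return OH_AI_ExportModel(model, OH_AI_MODELTYPE_MINDIR, export_path,
                             OH_AI_NO_QUANT, true, NULL, 0) == OH_AI_STATUS_SUCCESS
               ? 0 : -1;
}
```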
2639