# neural_network_core.h
<!--Kit: Neural Network Runtime Kit-->
<!--Subsystem: AI-->
<!--Owner: @GbuzhidaoR-->
<!--Designer: @GbuzhidaoR-->
<!--Tester: @GbuzhidaoR-->
<!--Adviser: @ge-yafang-->
## Overview

Defines APIs for the Neural Network Core module. The AI inference framework uses the native APIs provided by Neural Network Core to compile models and perform inference and computing on acceleration hardware.

Some API definitions have been moved from **neural_network_runtime.h** to this file for unified presentation. These APIs are supported since API version 11 and can be used in all later versions.

Currently, the APIs of Neural Network Core do not support multi-thread calling.

**File to include**: <neural_network_runtime/neural_network_core.h>

**Library**: libneural_network_core.so

**System capability**: SystemCapability.Ai.NeuralNetworkRuntime

**Since**: 11

**Related module**: [NeuralNetworkRuntime](capi-neuralnetworkruntime.md)
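
The compilation flow described in this file can be sketched as follows. This is a minimal illustration only, not a complete implementation: `model` is assumed to be an `OH_NNModel` instance built elsewhere (see **neural_network_runtime.h**), the first reported device is chosen arbitrarily, and error handling is reduced to early returns.

```c
#include <neural_network_runtime/neural_network_core.h>

// Sketch: compile an existing OH_NNModel on the first available device.
// "model" is assumed to have been constructed elsewhere.
OH_NN_ReturnCode CompileModel(const OH_NNModel *model, OH_NNCompilation **outCompilation)
{
    OH_NNCompilation *compilation = OH_NNCompilation_Construct(model);
    if (compilation == NULL) {
        return OH_NN_NULL_PTR;
    }

    // Query the available devices and pick the first one for illustration.
    const size_t *allDevicesID = NULL;
    uint32_t deviceCount = 0;
    OH_NN_ReturnCode ret = OH_NNDevice_GetAllDevicesID(&allDevicesID, &deviceCount);
    if (ret != OH_NN_SUCCESS || deviceCount == 0) {
        OH_NNCompilation_Destroy(&compilation);
        return ret;
    }
    OH_NNCompilation_SetDevice(compilation, allDevicesID[0]);

    // Push the model and compilation options to the device. No further
    // configuration calls are allowed after Build succeeds.
    ret = OH_NNCompilation_Build(compilation);
    if (ret != OH_NN_SUCCESS) {
        OH_NNCompilation_Destroy(&compilation);
        return ret;
    }

    *outCompilation = compilation;
    return OH_NN_SUCCESS;
}
```

The resulting compilation instance is typically passed to `OH_NNExecutor_Construct` and destroyed with `OH_NNCompilation_Destroy` once the executor has been created.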

## Summary

### Functions

| Name                                                        | Description                                                        |
| ------------------------------------------------------------ | ------------------------------------------------------------ |
| [OH_NNCompilation *OH_NNCompilation_Construct(const OH_NNModel *model)](#oh_nncompilation_construct) | Creates an [OH_NNCompilation](capi-neuralnetworkruntime-oh-nncompilation.md) instance.<br>After an [OH_NNModel](capi-neuralnetworkruntime-oh-nnmodel.md) instance is created, the OH_NNCompilation module passes the model instance to the underlying device for compilation.|
| [OH_NNCompilation *OH_NNCompilation_ConstructWithOfflineModelFile(const char *modelPath)](#oh_nncompilation_constructwithofflinemodelfile) | Creates a model compilation instance based on an offline model file.<br>This API conflicts with the one that utilizes an online model or offline model buffer. You can select only one of the three compilation APIs.<br>An offline model is one that is offline-compiled by the model converter provided by hardware vendors. As such, it can only be used on designated devices. However, the compilation time of an offline model is generally much shorter than that of the graph construction instance [OH_NNModel](capi-neuralnetworkruntime-oh-nnmodel.md).<br>During development, offline compilation needs to be performed and offline models need to be deployed in application packages.|
| [OH_NNCompilation *OH_NNCompilation_ConstructWithOfflineModelBuffer(const void *modelBuffer, size_t modelSize)](#oh_nncompilation_constructwithofflinemodelbuffer) | Creates a model compilation instance based on the offline model buffer.<br>This API conflicts with the one that utilizes an online model or offline model file. You can select only one of the three compilation APIs.<br>The returned [OH_NNCompilation](capi-neuralnetworkruntime-oh-nncompilation.md) instance only saves the **modelBuffer** pointer, but does not copy its data. The **modelBuffer** instance should not be released before the [OH_NNCompilation](capi-neuralnetworkruntime-oh-nncompilation.md) instance is destroyed.|
| [OH_NNCompilation *OH_NNCompilation_ConstructForCache()](#oh_nncompilation_constructforcache) | Creates an empty model compilation instance for later recovery from the model cache.<br>For details about the model cache, see [OH_NNCompilation_SetCache](capi-neural-network-core-h.md#oh_nncompilation_setcache).<br>The time required for model recovery from the model cache is less than the time required for compilation using [OH_NNModel](capi-neuralnetworkruntime-oh-nnmodel.md).<br>Call [OH_NNCompilation_SetCache](capi-neural-network-core-h.md#oh_nncompilation_setcache) or [OH_NNCompilation_ImportCacheFromBuffer](capi-neural-network-core-h.md#oh_nncompilation_importcachefrombuffer) and then [OH_NNCompilation_Build](capi-neural-network-core-h.md#oh_nncompilation_build) to complete model recovery.|
| [OH_NN_ReturnCode OH_NNCompilation_ExportCacheToBuffer(OH_NNCompilation *compilation, const void *buffer, size_t length, size_t *modelSize)](#oh_nncompilation_exportcachetobuffer) | Writes the model cache to the specified buffer.<br>For details about the model cache, see [OH_NNCompilation_SetCache](capi-neural-network-core-h.md#oh_nncompilation_setcache).<br>The model cache stores the result of [OH_NNCompilation_Build](capi-neural-network-core-h.md#oh_nncompilation_build). Therefore, this API must be called after [OH_NNCompilation_Build](capi-neural-network-core-h.md#oh_nncompilation_build) is complete.|
| [OH_NN_ReturnCode OH_NNCompilation_ImportCacheFromBuffer(OH_NNCompilation *compilation, const void *buffer, size_t modelSize)](#oh_nncompilation_importcachefrombuffer) | Reads the model cache from the specified buffer.<br>For details about the model cache, see [OH_NNCompilation_SetCache](capi-neural-network-core-h.md#oh_nncompilation_setcache).<br>After calling [OH_NNCompilation_ImportCacheFromBuffer](capi-neural-network-core-h.md#oh_nncompilation_importcachefrombuffer), call [OH_NNCompilation_Build](capi-neural-network-core-h.md#oh_nncompilation_build) to complete model recovery.<br>The **compilation** instance only stores the **buffer** pointer, but does not copy its data. You cannot release the **buffer** before the **compilation** instance is destroyed.|
| [OH_NN_ReturnCode OH_NNCompilation_AddExtensionConfig(OH_NNCompilation *compilation, const char *configName, const void *configValue, const size_t configValueSize)](#oh_nncompilation_addextensionconfig) | Adds extended configurations for custom device attributes.<br>Some devices have their own attributes, which have not been enabled in NNRt. This API helps you set custom attributes for these devices.<br>You need to obtain their names and values from the device vendor's documentation and add them to the model compilation instance. These attributes are passed directly to the device driver. If the device driver cannot parse the attributes, an error is returned.<br>After [OH_NNCompilation_Build](capi-neural-network-core-h.md#oh_nncompilation_build) is called, **configName** and **configValue** can be released.|
| [OH_NN_ReturnCode OH_NNCompilation_SetDevice(OH_NNCompilation *compilation, size_t deviceID)](#oh_nncompilation_setdevice) | Sets the device for model compilation and computing.<br>In the compilation phase, you need to specify the device for model compilation and computing. Call [OH_NNDevice_GetAllDevicesID](capi-neural-network-core-h.md#oh_nndevice_getalldevicesid) to obtain available device IDs. Then, call [OH_NNDevice_GetName](capi-neural-network-core-h.md#oh_nndevice_getname) and [OH_NNDevice_GetType](capi-neural-network-core-h.md#oh_nndevice_gettype) to obtain device information and pass the target device ID to this API.|
| [OH_NN_ReturnCode OH_NNCompilation_SetCache(OH_NNCompilation *compilation, const char *cachePath, uint32_t version)](#oh_nncompilation_setcache) | Sets the cache directory and version for model compilation.|
| [OH_NN_ReturnCode OH_NNCompilation_SetPerformanceMode(OH_NNCompilation *compilation, OH_NN_PerformanceMode performanceMode)](#oh_nncompilation_setperformancemode) | Sets the performance mode for model computing.<br>NNRt allows you to set the performance mode for model computing to meet the requirements of low power consumption and ultimate performance. If this API is not called to set the performance mode in the compilation phase, the model compilation instance assigns the [OH_NN_PERFORMANCE_NONE](capi-neural-network-runtime-type-h.md#oh_nn_performancemode) mode for the model by default. In this case, the device performs computing in the default performance mode. If this API is called on a device that does not support setting of the performance mode, the error code [OH_NN_UNAVAILABLE_DEVICE](capi-neural-network-runtime-type-h.md#oh_nn_returncode) is returned.|
| [OH_NN_ReturnCode OH_NNCompilation_SetPriority(OH_NNCompilation *compilation, OH_NN_Priority priority)](#oh_nncompilation_setpriority) | Sets the priority for model computing.<br>NNRt allows you to set computing priorities for models. The priorities apply only to models created by processes with the same UID; they do not affect models created by processes with different UIDs or on different devices. If this API is called on a device that does not support priority setting, the error code [OH_NN_UNAVAILABLE_DEVICE](capi-neural-network-runtime-type-h.md#oh_nn_returncode) is returned.|
| [OH_NN_ReturnCode OH_NNCompilation_EnableFloat16(OH_NNCompilation *compilation, bool enableFloat16)](#oh_nncompilation_enablefloat16) | Enables float16 for computing.<br>By default, a floating-point model uses float32 for computing. If this API is called on a device that supports float16, a float32 floating-point model will use float16 for computing, reducing memory usage and execution time. This option is invalid for fixed-point models, for example, fixed-point models of the int8 type.<br>If this API is called on a device that does not support float16, the error code [OH_NN_UNAVAILABLE_DEVICE](capi-neural-network-runtime-type-h.md#oh_nn_returncode) is returned.|
| [OH_NN_ReturnCode OH_NNCompilation_Build(OH_NNCompilation *compilation)](#oh_nncompilation_build) | Performs model compilation.<br>After the compilation configuration is complete, call this API to start model compilation. The model compilation instance pushes the model and compilation options to the device for compilation.<br>After this API is called, additional compilation operations cannot be performed. If [OH_NNCompilation_SetDevice](capi-neural-network-core-h.md#oh_nncompilation_setdevice), [OH_NNCompilation_SetCache](capi-neural-network-core-h.md#oh_nncompilation_setcache), [OH_NNCompilation_SetPerformanceMode](capi-neural-network-core-h.md#oh_nncompilation_setperformancemode), [OH_NNCompilation_SetPriority](capi-neural-network-core-h.md#oh_nncompilation_setpriority), or [OH_NNCompilation_EnableFloat16](capi-neural-network-core-h.md#oh_nncompilation_enablefloat16) is called, [OH_NN_OPERATION_FORBIDDEN](capi-neural-network-runtime-type-h.md#oh_nn_returncode) is returned.|
| [void OH_NNCompilation_Destroy(OH_NNCompilation **compilation)](#oh_nncompilation_destroy) | Destroys an [OH_NNCompilation](capi-neuralnetworkruntime-oh-nncompilation.md) instance.<br>This API needs to be used to destroy the model compilation instances created by calling [OH_NNCompilation_Construct](capi-neural-network-core-h.md#oh_nncompilation_construct), [OH_NNCompilation_ConstructWithOfflineModelFile](capi-neural-network-core-h.md#oh_nncompilation_constructwithofflinemodelfile), [OH_NNCompilation_ConstructWithOfflineModelBuffer](capi-neural-network-core-h.md#oh_nncompilation_constructwithofflinemodelbuffer) or [OH_NNCompilation_ConstructForCache](capi-neural-network-core-h.md#oh_nncompilation_constructforcache). If **compilation** or **\*compilation** is a null pointer, this API only prints warning logs but does not perform the destruction operation.|
| [NN_TensorDesc *OH_NNTensorDesc_Create()](#oh_nntensordesc_create) | Creates an [NN_TensorDesc](capi-neuralnetworkruntime-nn-tensordesc.md) instance.<br>[NN_TensorDesc](capi-neuralnetworkruntime-nn-tensordesc.md) describes various tensor attributes, such as the name, data type, shape, and format.<br>You can create an [NN_Tensor](capi-neuralnetworkruntime-nn-tensor.md) instance based on the input [NN_TensorDesc](capi-neuralnetworkruntime-nn-tensordesc.md) instance by calling the following APIs:<br>[OH_NNTensor_Create](capi-neural-network-core-h.md#oh_nntensor_create) <br>[OH_NNTensor_CreateWithSize](capi-neural-network-core-h.md#oh_nntensor_createwithsize) <br>[OH_NNTensor_CreateWithFd](capi-neural-network-core-h.md#oh_nntensor_createwithfd) <br>This API copies the [NN_TensorDesc](capi-neuralnetworkruntime-nn-tensordesc.md) instance to [NN_Tensor](capi-neuralnetworkruntime-nn-tensor.md). This way, you can create multiple [NN_Tensor](capi-neuralnetworkruntime-nn-tensor.md) instances with the same [NN_TensorDesc](capi-neuralnetworkruntime-nn-tensordesc.md) instance. If the [NN_TensorDesc](capi-neuralnetworkruntime-nn-tensordesc.md) instance is no longer needed, call [OH_NNTensorDesc_Destroy](capi-neural-network-core-h.md#oh_nntensordesc_destroy) to destroy it.|
| [OH_NN_ReturnCode OH_NNTensorDesc_Destroy(NN_TensorDesc **tensorDesc)](#oh_nntensordesc_destroy) | Destroys an [NN_TensorDesc](capi-neuralnetworkruntime-nn-tensordesc.md) instance.<br>If the [NN_TensorDesc](capi-neuralnetworkruntime-nn-tensordesc.md) instance is no longer needed, call this API to destroy it. Otherwise, a memory leak occurs.<br>If <b>tensorDesc</b> or <b>\*tensorDesc</b> is a null pointer, an error is returned but the object will not be destroyed.|
| [OH_NN_ReturnCode OH_NNTensorDesc_SetName(NN_TensorDesc *tensorDesc, const char *name)](#oh_nntensordesc_setname) | Sets the name of an [NN_TensorDesc](capi-neuralnetworkruntime-nn-tensordesc.md) instance.<br>After an [NN_TensorDesc](capi-neuralnetworkruntime-nn-tensordesc.md) instance is created, call this API to set the tensor name. The value of \*name is a C-style string ending with \0.<br>If <b>tensorDesc</b> or <b>name</b> is a null pointer, an error is returned.|
| [OH_NN_ReturnCode OH_NNTensorDesc_GetName(const NN_TensorDesc *tensorDesc, const char **name)](#oh_nntensordesc_getname) | Obtains the name of an [NN_TensorDesc](capi-neuralnetworkruntime-nn-tensordesc.md) instance.<br>You can use this API to obtain the name of the specified [NN_TensorDesc](capi-neuralnetworkruntime-nn-tensordesc.md) instance. The value of **\*name** is a C-style string ending with **\0**.<br>If <b>tensorDesc</b> or <b>name</b> is a null pointer, an error is returned. As an output parameter, **\*name** must be a null pointer. Otherwise, an error is returned.<br>For example, you should define **char\* tensorName = NULL** and pass **&tensorName** as a parameter of **name**.<br>You do not need to release the memory of **name**. When **tensorDesc** is destroyed, it is automatically released.|
| [OH_NN_ReturnCode OH_NNTensorDesc_SetDataType(NN_TensorDesc *tensorDesc, OH_NN_DataType dataType)](#oh_nntensordesc_setdatatype) | Sets the data type of an [NN_TensorDesc](capi-neuralnetworkruntime-nn-tensordesc.md) instance.<br>After an [NN_TensorDesc](capi-neuralnetworkruntime-nn-tensordesc.md) instance is created, call this API to set the tensor data type.<br>If **tensorDesc** is a null pointer, an error is returned.|
| [OH_NN_ReturnCode OH_NNTensorDesc_GetDataType(const NN_TensorDesc *tensorDesc, OH_NN_DataType *dataType)](#oh_nntensordesc_getdatatype) | Obtains the data type of an [NN_TensorDesc](capi-neuralnetworkruntime-nn-tensordesc.md) instance.<br>You can use this API to obtain the data type of the specified [NN_TensorDesc](capi-neuralnetworkruntime-nn-tensordesc.md) instance.<br>If <b>tensorDesc</b> or <b>dataType</b> is a null pointer, an error is returned.|
| [OH_NN_ReturnCode OH_NNTensorDesc_SetShape(NN_TensorDesc *tensorDesc, const int32_t *shape, size_t shapeLength)](#oh_nntensordesc_setshape) | Sets the data shape of an [NN_TensorDesc](capi-neuralnetworkruntime-nn-tensordesc.md) instance.<br>After an [NN_TensorDesc](capi-neuralnetworkruntime-nn-tensordesc.md) instance is created, call this API to set the tensor shape.<br>If <b>tensorDesc</b> or <b>shape</b> is a null pointer, or <b>shapeLength</b> is 0, an error is returned.|
| [OH_NN_ReturnCode OH_NNTensorDesc_GetShape(const NN_TensorDesc *tensorDesc, int32_t **shape, size_t *shapeLength)](#oh_nntensordesc_getshape) | Obtains the shape of an [NN_TensorDesc](capi-neuralnetworkruntime-nn-tensordesc.md) instance.<br>You can use this API to obtain the shape of the specified [NN_TensorDesc](capi-neuralnetworkruntime-nn-tensordesc.md) instance.<br>If <b>tensorDesc</b>, <b>shape</b>, or <b>shapeLength</b> is a null pointer, an error is returned. As an output parameter, **\*shape** must be a null pointer. Otherwise, an error is returned.<br>For example, you should define **int32_t\* tensorShape = NULL** and pass **&tensorShape** as a parameter of **shape**.<br>You do not need to release the memory of **shape**. When **tensorDesc** is destroyed, it is automatically released.|
| [OH_NN_ReturnCode OH_NNTensorDesc_SetFormat(NN_TensorDesc *tensorDesc, OH_NN_Format format)](#oh_nntensordesc_setformat) | Sets the data format of an [NN_TensorDesc](capi-neuralnetworkruntime-nn-tensordesc.md) instance.<br>After an [NN_TensorDesc](capi-neuralnetworkruntime-nn-tensordesc.md) instance is created, call this API to set [OH_NN_Format](capi-neural-network-runtime-type-h.md#oh_nn_format) of the tensor.<br>If **tensorDesc** is a null pointer, an error is returned.|
| [OH_NN_ReturnCode OH_NNTensorDesc_GetFormat(const NN_TensorDesc *tensorDesc, OH_NN_Format *format)](#oh_nntensordesc_getformat) | Obtains the data format of an [NN_TensorDesc](capi-neuralnetworkruntime-nn-tensordesc.md) instance.<br>You can use this API to obtain [OH_NN_Format](capi-neural-network-runtime-type-h.md#oh_nn_format) of the specified [NN_TensorDesc](capi-neuralnetworkruntime-nn-tensordesc.md) instance.<br>If <b>tensorDesc</b> or <b>format</b> is a null pointer, an error is returned.|
| [OH_NN_ReturnCode OH_NNTensorDesc_GetElementCount(const NN_TensorDesc *tensorDesc, size_t *elementCount)](#oh_nntensordesc_getelementcount) | Obtains the number of elements in an [NN_TensorDesc](capi-neuralnetworkruntime-nn-tensordesc.md) instance.<br>You can use this API to obtain the number of elements in the specified [NN_TensorDesc](capi-neuralnetworkruntime-nn-tensordesc.md) instance. To obtain the size of tensor data, call [OH_NNTensorDesc_GetByteSize](capi-neural-network-core-h.md#oh_nntensordesc_getbytesize).<br>If the tensor shape is dynamically variable, this API returns an error code and **elementCount** is **0**.<br>If <b>tensorDesc</b> or <b>elementCount</b> is a null pointer, an error is returned.|
| [OH_NN_ReturnCode OH_NNTensorDesc_GetByteSize(const NN_TensorDesc *tensorDesc, size_t *byteSize)](#oh_nntensordesc_getbytesize) | Obtains the number of bytes occupied by the tensor data, calculated from the shape and data type of an [NN_TensorDesc](capi-neuralnetworkruntime-nn-tensordesc.md) instance.<br>You can use this API to obtain the number of bytes occupied by the tensor data, calculated from the shape and data type of the specified [NN_TensorDesc](capi-neuralnetworkruntime-nn-tensordesc.md) instance.<br>If the tensor shape is dynamically variable, this API returns an error code and **byteSize** is **0**.<br>To obtain the number of elements in the tensor data, call [OH_NNTensorDesc_GetElementCount](capi-neural-network-core-h.md#oh_nntensordesc_getelementcount).<br>If <b>tensorDesc</b> or <b>byteSize</b> is a null pointer, an error is returned.|
| [NN_Tensor *OH_NNTensor_Create(size_t deviceID, NN_TensorDesc *tensorDesc)](#oh_nntensor_create) | Creates an [NN_Tensor](capi-neuralnetworkruntime-nn-tensor.md) instance from [NN_TensorDesc](capi-neuralnetworkruntime-nn-tensordesc.md).<br>This API uses [OH_NNTensorDesc_GetByteSize](capi-neural-network-core-h.md#oh_nntensordesc_getbytesize) to calculate the number of bytes of tensor data and allocate device memory for it. The device driver directly obtains tensor data in zero-copy mode.<br>This API copies **tensorDesc** to [NN_Tensor](capi-neuralnetworkruntime-nn-tensor.md). Therefore, if **tensorDesc** is no longer needed, destroy it by calling [OH_NNTensorDesc_Destroy](capi-neural-network-core-h.md#oh_nntensordesc_destroy).<br>If the tensor shape is dynamic, an error is returned.<br><b>deviceID</b> indicates the selected device. If the value is **0**, the first device in the current device list is used by default.<br>**tensorDesc** is mandatory. If it is a null pointer, an error is returned.<br>If the [NN_Tensor](capi-neuralnetworkruntime-nn-tensor.md) instance is no longer needed, destroy it by calling [OH_NNTensor_Destroy](capi-neural-network-core-h.md#oh_nntensor_destroy).|
| [NN_Tensor *OH_NNTensor_CreateWithSize(size_t deviceID, NN_TensorDesc *tensorDesc, size_t size)](#oh_nntensor_createwithsize) | Creates an [NN_Tensor](capi-neuralnetworkruntime-nn-tensor.md) instance based on the specified memory size and [NN_TensorDesc](capi-neuralnetworkruntime-nn-tensordesc.md) instance.<br>This API uses **size** as the number of bytes of tensor data and allocates device memory to it. The device driver directly obtains tensor data in zero-copy mode.<br>Note that this API copies **tensorDesc** to [NN_Tensor](capi-neuralnetworkruntime-nn-tensor.md). Therefore, if **tensorDesc** is no longer needed, destroy it by calling [OH_NNTensorDesc_Destroy](capi-neural-network-core-h.md#oh_nntensordesc_destroy).<br>**deviceID** indicates the ID of the selected device. If the value is **0**, the first device is used.<br>**tensorDesc** is mandatory. If it is a null pointer, an error is returned.<br>The value of **size** must be greater than or equal to the number of bytes occupied by **tensorDesc**, which can be obtained by calling [OH_NNTensorDesc_GetByteSize](capi-neural-network-core-h.md#oh_nntensordesc_getbytesize). Otherwise, an error is returned. If the tensor shape is dynamic, <b>size</b> is not checked.<br>If the [NN_Tensor](capi-neuralnetworkruntime-nn-tensor.md) instance is no longer needed, destroy it by calling [OH_NNTensor_Destroy](capi-neural-network-core-h.md#oh_nntensor_destroy).|
| [NN_Tensor *OH_NNTensor_CreateWithFd(size_t deviceID, NN_TensorDesc *tensorDesc, int fd, size_t size, size_t offset)](#oh_nntensor_createwithfd) | Creates an [NN_Tensor](capi-neuralnetworkruntime-nn-tensor.md) instance based on the specified file descriptor of the shared memory and [NN_TensorDesc](capi-neuralnetworkruntime-nn-tensordesc.md) instance.<br>This API reuses the shared memory corresponding to **fd**, which may come from another [NN_Tensor](capi-neuralnetworkruntime-nn-tensor.md) instance. When the tensor is destroyed by calling [OH_NNTensor_Destroy](capi-neural-network-core-h.md#oh_nntensor_destroy), the memory storing the tensor data is not released.<br>Note that this API copies **tensorDesc** to [NN_Tensor](capi-neuralnetworkruntime-nn-tensor.md). Therefore, if **tensorDesc** is no longer needed, destroy it by calling [OH_NNTensorDesc_Destroy](capi-neural-network-core-h.md#oh_nntensordesc_destroy).<br><b>deviceID</b> indicates the selected device. If the value is **0**, the first device in the current device list is used by default.<br>**tensorDesc** is mandatory. If the pointer is null, an error is returned.<br>If the [NN_Tensor](capi-neuralnetworkruntime-nn-tensor.md) instance is no longer needed, destroy it by calling [OH_NNTensor_Destroy](capi-neural-network-core-h.md#oh_nntensor_destroy).|
| [OH_NN_ReturnCode OH_NNTensor_Destroy(NN_Tensor **tensor)](#oh_nntensor_destroy) | Destroys an [NN_Tensor](capi-neuralnetworkruntime-nn-tensor.md) instance.<br>If the [NN_Tensor](capi-neuralnetworkruntime-nn-tensor.md) instance is no longer needed, call this API to destroy it. Otherwise, a memory leak occurs.<br>If <b>tensor</b> or <b>\*tensor</b> is a null pointer, an error is returned but the object will not be destroyed.|
| [NN_TensorDesc *OH_NNTensor_GetTensorDesc(const NN_Tensor *tensor)](#oh_nntensor_gettensordesc) | Obtains an [NN_TensorDesc](capi-neuralnetworkruntime-nn-tensordesc.md) instance of [NN_Tensor](capi-neuralnetworkruntime-nn-tensor.md).<br>You can use this API to obtain the pointer to the [NN_TensorDesc](capi-neuralnetworkruntime-nn-tensordesc.md) instance of the specified [NN_Tensor](capi-neuralnetworkruntime-nn-tensor.md) instance.<br>You can obtain tensor attributes of various types from the returned [NN_TensorDesc](capi-neuralnetworkruntime-nn-tensordesc.md) instance, such as the name, data format, data type, and shape.<br>You should not destroy the returned [NN_TensorDesc](capi-neuralnetworkruntime-nn-tensordesc.md) instance because it points to an internal instance of [NN_Tensor](capi-neuralnetworkruntime-nn-tensor.md). Otherwise, once [OH_NNTensor_Destroy](capi-neural-network-core-h.md#oh_nntensor_destroy) is called, a crash may occur due to double memory release.<br>If <b>tensor</b> is a null pointer, a null pointer is returned.|
| [void *OH_NNTensor_GetDataBuffer(const NN_Tensor *tensor)](#oh_nntensor_getdatabuffer) | Obtains the memory address of [NN_Tensor](capi-neuralnetworkruntime-nn-tensor.md) data.<br>You can read data from or write data to the tensor data memory. The data memory is mapped from the shared memory on the device. Therefore, the device driver can directly obtain tensor data in zero-copy mode.<br>Only tensor data in the [offset, size) segment in the corresponding shared memory can be used. **offset** indicates the offset in the shared memory and can be obtained by calling [OH_NNTensor_GetOffset](capi-neural-network-core-h.md#oh_nntensor_getoffset). **size** indicates the total size of the shared memory, which can be obtained by calling [OH_NNTensor_GetSize](capi-neural-network-core-h.md#oh_nntensor_getsize).<br>If <b>tensor</b> is a null pointer, a null pointer is returned.|
| [OH_NN_ReturnCode OH_NNTensor_GetFd(const NN_Tensor *tensor, int *fd)](#oh_nntensor_getfd) | Obtains the file descriptor (**fd**) of the shared memory where [NN_Tensor](capi-neuralnetworkruntime-nn-tensor.md) data is stored.<br>**fd** corresponds to a device shared memory and can be used by another [NN_Tensor](capi-neuralnetworkruntime-nn-tensor.md) through [OH_NNTensor_CreateWithFd](capi-neural-network-core-h.md#oh_nntensor_createwithfd).<br>If <b>tensor</b> or <b>fd</b> is a null pointer, an error is returned.|
| [OH_NN_ReturnCode OH_NNTensor_GetSize(const NN_Tensor *tensor, size_t *size)](#oh_nntensor_getsize) | Obtains the size of the shared memory where the [NN_Tensor](capi-neuralnetworkruntime-nn-tensor.md) data is stored.<br>For a tensor created by [OH_NNTensor_CreateWithSize](capi-neural-network-core-h.md#oh_nntensor_createwithsize) or [OH_NNTensor_CreateWithFd](capi-neural-network-core-h.md#oh_nntensor_createwithfd), **size** is the value specified at creation. However, for a tensor created by [OH_NNTensor_Create](capi-neural-network-core-h.md#oh_nntensor_create), **size** is equal to the number of bytes actually occupied by the tensor data, which can be obtained by calling [OH_NNTensorDesc_GetByteSize](capi-neural-network-core-h.md#oh_nntensordesc_getbytesize).<br>Only tensor data in the [offset, size) segment in the shared memory corresponding to the **fd** can be used. **offset** indicates the offset in the shared memory and can be obtained by calling [OH_NNTensor_GetOffset](capi-neural-network-core-h.md#oh_nntensor_getoffset). **size** indicates the total size of the shared memory, which can be obtained by calling [OH_NNTensor_GetSize](capi-neural-network-core-h.md#oh_nntensor_getsize).<br>If <b>tensor</b> or <b>size</b> is a null pointer, an error is returned.|
| [OH_NN_ReturnCode OH_NNTensor_GetOffset(const NN_Tensor *tensor, size_t *offset)](#oh_nntensor_getoffset) | Obtains the offset of [NN_Tensor](capi-neuralnetworkruntime-nn-tensor.md) data in the shared memory.<br>**offset** indicates the offset of tensor data in the corresponding shared memory. It can be used by another [NN_Tensor](capi-neuralnetworkruntime-nn-tensor.md) together with **fd** and **size** of the shared memory through [OH_NNTensor_CreateWithFd](capi-neural-network-core-h.md#oh_nntensor_createwithfd).<br>Only tensor data in the [offset, size) segment in the shared memory corresponding to the **fd** can be used. **offset** indicates the offset in the shared memory and can be obtained by calling [OH_NNTensor_GetOffset](capi-neural-network-core-h.md#oh_nntensor_getoffset). **size** indicates the total size of the shared memory, which can be obtained by calling [OH_NNTensor_GetSize](capi-neural-network-core-h.md#oh_nntensor_getsize).<br>If <b>tensor</b> or <b>offset</b> is a null pointer, an error is returned.|
| [OH_NNExecutor *OH_NNExecutor_Construct(OH_NNCompilation *compilation)](#oh_nnexecutor_construct) | Creates an [OH_NNExecutor](capi-neuralnetworkruntime-oh-nnexecutor.md) executor instance.<br>This API constructs a model inference executor for a device based on the specified [OH_NNCompilation](capi-neuralnetworkruntime-oh-nncompilation.md) instance. Use [OH_NNExecutor_SetInput](capi-neural-network-runtime-h.md#oh_nnexecutor_setinput) to set the model input data. After the input data is set, call [OH_NNExecutor_Run](capi-neural-network-runtime-h.md#oh_nnexecutor_run) to perform inference and then call [OH_NNExecutor_SetOutput](capi-neural-network-runtime-h.md#oh_nnexecutor_setoutput) to obtain the computing result. After an [OH_NNExecutor](capi-neuralnetworkruntime-oh-nnexecutor.md) instance is created through the [OH_NNCompilation](capi-neuralnetworkruntime-oh-nncompilation.md) instance, destroy the [OH_NNCompilation](capi-neuralnetworkruntime-oh-nncompilation.md) instance if it is no longer needed.|
| [OH_NN_ReturnCode OH_NNExecutor_GetOutputShape(OH_NNExecutor *executor, uint32_t outputIndex, int32_t **shape, uint32_t *shapeLength)](#oh_nnexecutor_getoutputshape) | Obtains the dimension information about the output tensor.|
| [void OH_NNExecutor_Destroy(OH_NNExecutor **executor)](#oh_nnexecutor_destroy) | Destroys an executor instance to release the memory occupied by it.<br>This API needs to be called to release the executor instance created by calling [OH_NNExecutor_Construct](capi-neural-network-core-h.md#oh_nnexecutor_construct). Otherwise, a memory leak occurs. If **executor** or **\*executor** is a null pointer, this API only prints the warning log and does not execute the release logic.|
| [OH_NN_ReturnCode OH_NNExecutor_GetInputCount(const OH_NNExecutor *executor, size_t *inputCount)](#oh_nnexecutor_getinputcount) | Obtains the number of input tensors.<br>You can obtain the number of input tensors from **executor**, and then use [OH_NNExecutor_CreateInputTensorDesc](capi-neural-network-core-h.md#oh_nnexecutor_createinputtensordesc) to create a tensor description based on the specified tensor index.|
| [OH_NN_ReturnCode OH_NNExecutor_GetOutputCount(const OH_NNExecutor *executor, size_t *outputCount)](#oh_nnexecutor_getoutputcount) | Obtains the number of output tensors.<br>You can obtain the number of output tensors from **executor**, and then use [OH_NNExecutor_CreateOutputTensorDesc](capi-neural-network-core-h.md#oh_nnexecutor_createoutputtensordesc) to create a tensor description based on the specified tensor index.|
| [NN_TensorDesc *OH_NNExecutor_CreateInputTensorDesc(const OH_NNExecutor *executor, size_t index)](#oh_nnexecutor_createinputtensordesc) | Creates the description of an input tensor based on the specified index value.<br>The description contains all types of attribute values of the tensor. If the value of **index** reaches or exceeds the number of input tensors, an error is returned. You can obtain the number of input tensors by calling [OH_NNExecutor_GetInputCount](capi-neural-network-core-h.md#oh_nnexecutor_getinputcount).|
| [NN_TensorDesc *OH_NNExecutor_CreateOutputTensorDesc(const OH_NNExecutor *executor, size_t index)](#oh_nnexecutor_createoutputtensordesc) | Creates the description of an output tensor based on the specified index value.<br>The description contains all types of attribute values of the tensor. If the value of **index** reaches or exceeds the number of output tensors, an error is returned. You can obtain the number of output tensors by calling [OH_NNExecutor_GetOutputCount](capi-neural-network-core-h.md#oh_nnexecutor_getoutputcount).|
| [OH_NN_ReturnCode OH_NNExecutor_GetInputDimRange(const OH_NNExecutor *executor, size_t index, size_t **minInputDims, size_t **maxInputDims, size_t *shapeLength)](#oh_nnexecutor_getinputdimrange) | Obtains the dimension range of all input tensors.<br>If the input tensor has a dynamic shape, the dimension range supported by the tensor may vary by device. You can call this API to obtain the dimension range supported by the current device. **\*minInputDims** saves the minimum dimension of the specified input tensor (the number of dimensions matches the shape), while **\*maxInputDims** saves the maximum dimension.|
| [OH_NN_ReturnCode OH_NNExecutor_SetOnRunDone(OH_NNExecutor *executor, NN_OnRunDone onRunDone)](#oh_nnexecutor_setonrundone) | Sets the callback processing function invoked when the asynchronous inference ends.<br>For details about the callback function, see [NN_OnRunDone](capi-neural-network-runtime-type-h.md#nn_onrundone).|
| [OH_NN_ReturnCode OH_NNExecutor_SetOnServiceDied(OH_NNExecutor *executor, NN_OnServiceDied onServiceDied)](#oh_nnexecutor_setonservicedied) | Sets the callback processing function invoked when the device driver service terminates unexpectedly during asynchronous inference.<br>For details about the callback function, see [NN_OnServiceDied](capi-neural-network-runtime-type-h.md#nn_onservicedied).|
77| [OH_NN_ReturnCode OH_NNExecutor_RunSync(OH_NNExecutor *executor,NN_Tensor *inputTensor[],size_t inputCount,NN_Tensor *outputTensor[],size_t outputCount)](#oh_nnexecutor_runsync) | Performs synchronous inference.<br>You need to create input and output tensors by calling [OH_NNTensor_Create](capi-neural-network-core-h.md#oh_nntensor_create), [OH_NNTensor_CreateWithSize](capi-neural-network-core-h.md#oh_nntensor_createwithsize), or [OH_NNTensor_CreateWithFd](capi-neural-network-core-h.md#oh_nntensor_createwithfd). Then, use [OH_NNTensor_GetDataBuffer](capi-neural-network-core-h.md#oh_nntensor_getdatabuffer) to obtain the pointer to tensor data and copy the input data to it. The executor performs model inference, generates the inference result, and writes the result to the output tensor.<br>If the output tensor has a dynamic shape, you can obtain the actual shape of the output tensor by calling [OH_NNExecutor_GetOutputShape](capi-neural-network-core-h.md#oh_nnexecutor_getoutputshape). Alternatively, obtain the tensor description from the input tensor by calling [OH_NNTensor_GetTensorDesc](capi-neural-network-core-h.md#oh_nntensor_gettensordesc), and then obtain the actual shape by calling [OH_NNTensorDesc_GetShape](capi-neural-network-core-h.md#oh_nntensordesc_getshape).|
78| [OH_NN_ReturnCode OH_NNExecutor_RunAsync(OH_NNExecutor *executor,NN_Tensor *inputTensor[],size_t inputCount,NN_Tensor *outputTensor[],size_t outputCount,int32_t timeout,void *userData)](#oh_nnexecutor_runasync) | Performs asynchronous inference.<br>You need to create input and output tensors by calling [OH_NNTensor_Create](capi-neural-network-core-h.md#oh_nntensor_create), [OH_NNTensor_CreateWithSize](capi-neural-network-core-h.md#oh_nntensor_createwithsize), or [OH_NNTensor_CreateWithFd](capi-neural-network-core-h.md#oh_nntensor_createwithfd). Then, use [OH_NNTensor_GetDataBuffer](capi-neural-network-core-h.md#oh_nntensor_getdatabuffer) to obtain the pointer to tensor data and copy the input data to it. The executor performs model inference, generates the inference result, and writes the result to the output tensor.<br>If the output tensor has a dynamic shape, you can obtain the actual shape of the output tensor by calling [OH_NNExecutor_GetOutputShape](capi-neural-network-core-h.md#oh_nnexecutor_getoutputshape). Alternatively, obtain the tensor description from the input tensor by calling [OH_NNTensor_GetTensorDesc](capi-neural-network-core-h.md#oh_nntensor_gettensordesc), and then obtain the actual shape by calling [OH_NNTensorDesc_GetShape](capi-neural-network-core-h.md#oh_nntensordesc_getshape).<br>This API works in non-blocking mode and returns the result immediately after being called. You can obtain the inference result and execution return status through the [NN_OnRunDone](capi-neural-network-runtime-type-h.md#nn_onrundone) callback. 
If the device driver service stops abnormally during execution, you can use the [NN_OnServiceDied](capi-neural-network-runtime-type-h.md#nn_onservicedied) callback for exception processing.<br>You can set the NN_OnRunDone](capi-neural-network-runtime-type-h.md#nn_onrundone) and [NN_OnServiceDied](capi-neural-network-runtime-type-h.md#nn_onservicedied) callbacks by calling [OH_NNExecutor_SetOnRunDone](capi-neural-network-core-h.md#oh_nnexecutor_setonrundone) and [OH_NNExecutor_SetOnServiceDied](capi-neural-network-core-h.md#oh_nnexecutor_setonservicedied).<br>If the inference times out, it is terminated immediately and the error code [OH_NN_TIMEOUT](capi-neural-network-runtime-type-h.md#oh_nn_returncode) is returned through the [NN_OnRunDone](capi-neural-network-runtime-type-h.md#nn_onrundone) callback.<br>**userData** is the identifier used to distinguish different asynchronous inferences and is returned as the first parameter in the callback. You can use any data that can distinguish different inferences as the identifier.|
79| [OH_NN_ReturnCode OH_NNDevice_GetAllDevicesID(const size_t **allDevicesID, uint32_t *deviceCount)](#oh_nndevice_getalldevicesid) | Obtains the ID of the device connected to NNRt.<br>Each device has a unique and fixed ID, which is returned through a uint32_t array.<br>When device IDs are returned through the size_t array, each element of the array is the ID of a single device. Internal management is used for array memory. The data pointer remains valid before this API is called next time.|
80| [OH_NN_ReturnCode OH_NNDevice_GetName(size_t deviceID, const char **name)](#oh_nndevice_getname) | Obtains the name of the specified device.<br>**deviceID** specifies the device ID used to obtain the device name. The device ID needs to be obtained by calling [OH_NNDevice_GetAllDevicesID](capi-neural-network-core-h.md#oh_nndevice_getalldevicesid). If the value of **deviceID** is **0**, the first device in the device list is used by default.  **\*name** is a C-style string ended with **\0**.<br>**\*name** must be a null pointer. Otherwise, the error code [OH_NN_INVALID_PARAMETER](capi-neural-network-runtime-type-h.md#oh_nn_returncode) is returned. For example, you should define **char\* deviceName = NULL**, and then pass **&deviceName** as an input parameter.|
81| [OH_NN_ReturnCode OH_NNDevice_GetType(size_t deviceID, OH_NN_DeviceType *deviceType)](#oh_nndevice_gettype) | Obtains the type of the specified device.<br>**deviceID** specifies the device ID used to obtain the device type. If the value of **deviceID** is **0**, the first device in the device list is used by default. Currently, the following device types are supported:<br>- **OH_NN_CPU**: CPU device.<br>- **OH_NN_GPU**: GPU device.<br>- **OH_NN_ACCELERATOR**: machine learning dedicated accelerator.<br>- **OH_NN_OTHERS**: other device types.|

## Function Description

### OH_NNCompilation_Construct()

```
OH_NNCompilation *OH_NNCompilation_Construct(const OH_NNModel *model)
```

**Description**

Creates an [OH_NNCompilation](capi-neuralnetworkruntime-oh-nncompilation.md) instance.

After an [OH_NNModel](capi-neuralnetworkruntime-oh-nnmodel.md) instance is created, the OH_NNCompilation module passes the model instance to the underlying device for compilation.

This API creates an [OH_NNCompilation](capi-neuralnetworkruntime-oh-nncompilation.md) instance based on the passed [OH_NNModel](capi-neuralnetworkruntime-oh-nnmodel.md) instance. Call [OH_NNCompilation_SetDevice](capi-neural-network-core-h.md#oh_nncompilation_setdevice) to specify the device for model compilation, and then call [OH_NNCompilation_Build](capi-neural-network-core-h.md#oh_nncompilation_build) to complete model compilation.

In addition to selecting the computing device, the OH_NNCompilation module supports features such as model caching, performance preference, priority setting, and float16 computing, which are provided by the following APIs:

[OH_NNCompilation_SetCache](capi-neural-network-core-h.md#oh_nncompilation_setcache)

[OH_NNCompilation_SetPerformanceMode](capi-neural-network-core-h.md#oh_nncompilation_setperformancemode)

[OH_NNCompilation_SetPriority](capi-neural-network-core-h.md#oh_nncompilation_setpriority)

[OH_NNCompilation_EnableFloat16](capi-neural-network-core-h.md#oh_nncompilation_enablefloat16)

After this API is called to create an [OH_NNCompilation](capi-neuralnetworkruntime-oh-nncompilation.md) instance, the [OH_NNModel](capi-neuralnetworkruntime-oh-nnmodel.md) instance can be released.

**Since**: 9


**Parameters**

| Name| Description|
| -- | -- |
| [const OH_NNModel](capi-neuralnetworkruntime-oh-nnmodel.md) *model | Pointer to the [OH_NNModel](capi-neuralnetworkruntime-oh-nnmodel.md) instance.|

**Returns**

| Type| Description|
| -- | -- |
| [OH_NNCompilation](capi-neuralnetworkruntime-oh-nncompilation.md) * | Pointer to the [OH_NNCompilation](capi-neuralnetworkruntime-oh-nncompilation.md) instance. If the operation fails, **NULL** is returned.|

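The construct–set device–build workflow described above can be sketched as follows. This is an illustrative fragment rather than a sample from the kit: it assumes a model instance built elsewhere and a device ID obtained via [OH_NNDevice_GetAllDevicesID](capi-neural-network-core-h.md#oh_nndevice_getalldevicesid), and it reduces error handling to early returns. It cannot run outside a device that provides libneural_network_core.so.

```
#include <neural_network_runtime/neural_network_core.h>

// Sketch: compile an existing OH_NNModel on the given device.
OH_NNCompilation *CompileModel(const OH_NNModel *model, size_t deviceID)
{
    OH_NNCompilation *compilation = OH_NNCompilation_Construct(model);
    if (compilation == NULL) {
        return NULL;  // construction failed
    }
    if (OH_NNCompilation_SetDevice(compilation, deviceID) != OH_NN_SUCCESS ||
        OH_NNCompilation_Build(compilation) != OH_NN_SUCCESS) {
        OH_NNCompilation_Destroy(&compilation);  // sets compilation to NULL
        return NULL;
    }
    // From this point on, the caller may release the OH_NNModel instance.
    return compilation;
}
```
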
### OH_NNCompilation_ConstructWithOfflineModelFile()

```
OH_NNCompilation *OH_NNCompilation_ConstructWithOfflineModelFile(const char *modelPath)
```

**Description**

Creates a model compilation instance based on an offline model file.

This API conflicts with the APIs that use an online model or an offline model buffer. You can use only one of the three compilation APIs.

An offline model is one that is offline-compiled by the model converter provided by hardware vendors. As such, it can only be used on designated devices. However, compiling an offline model generally takes much less time than compiling a graph built with the graph construction instance [OH_NNModel](capi-neuralnetworkruntime-oh-nnmodel.md).

During development, the offline compilation must be performed in advance, and the offline model must be deployed in the application package.

**Since**: 11


**Parameters**

| Name| Description|
| -- | -- |
| const char *modelPath | Path of the offline model file.|

**Returns**

| Type| Description|
| -- | -- |
| [OH_NNCompilation](capi-neuralnetworkruntime-oh-nncompilation.md) * | Pointer to the [OH_NNCompilation](capi-neuralnetworkruntime-oh-nncompilation.md) instance. If the operation fails, **NULL** is returned.|

### OH_NNCompilation_ConstructWithOfflineModelBuffer()

```
OH_NNCompilation *OH_NNCompilation_ConstructWithOfflineModelBuffer(const void *modelBuffer, size_t modelSize)
```

**Description**

Creates a model compilation instance based on the offline model buffer.

This API conflicts with the APIs that use an online model or an offline model file. You can use only one of the three compilation APIs.

The returned [OH_NNCompilation](capi-neuralnetworkruntime-oh-nncompilation.md) instance only stores the **modelBuffer** pointer and does not copy its data. Do not release **modelBuffer** before the [OH_NNCompilation](capi-neuralnetworkruntime-oh-nncompilation.md) instance is destroyed.

**Since**: 11


**Parameters**

| Name| Description|
| -- | -- |
| const void *modelBuffer | Memory that stores the offline model.|
| size_t modelSize | Memory size of the offline model.|

**Returns**

| Type| Description|
| -- | -- |
| [OH_NNCompilation](capi-neuralnetworkruntime-oh-nncompilation.md) * | Pointer to the [OH_NNCompilation](capi-neuralnetworkruntime-oh-nncompilation.md) instance. If the operation fails, **NULL** is returned.|

### OH_NNCompilation_ConstructForCache()

```
OH_NNCompilation *OH_NNCompilation_ConstructForCache()
```

**Description**

Creates an empty model compilation instance for later recovery from the model cache.

For details about the model cache, see [OH_NNCompilation_SetCache](capi-neural-network-core-h.md#oh_nncompilation_setcache).

The time required for model recovery from the model cache is less than the time required for compilation using [OH_NNModel](capi-neuralnetworkruntime-oh-nnmodel.md).

Call [OH_NNCompilation_SetCache](capi-neural-network-core-h.md#oh_nncompilation_setcache) or [OH_NNCompilation_ImportCacheFromBuffer](capi-neural-network-core-h.md#oh_nncompilation_importcachefrombuffer) and then [OH_NNCompilation_Build](capi-neural-network-core-h.md#oh_nncompilation_build) to complete model recovery.

**Since**: 11

**Returns**

| Type| Description|
| -- | -- |
| [OH_NNCompilation](capi-neuralnetworkruntime-oh-nncompilation.md) * | Pointer to the [OH_NNCompilation](capi-neuralnetworkruntime-oh-nncompilation.md) instance. If the operation fails, **NULL** is returned.|

### OH_NNCompilation_ExportCacheToBuffer()

```
OH_NN_ReturnCode OH_NNCompilation_ExportCacheToBuffer(OH_NNCompilation *compilation, const void *buffer, size_t length, size_t *modelSize)
```

**Description**

Writes the model cache to the specified buffer.

For details about the model cache, see [OH_NNCompilation_SetCache](capi-neural-network-core-h.md#oh_nncompilation_setcache).

The model cache stores the result of [OH_NNCompilation_Build](capi-neural-network-core-h.md#oh_nncompilation_build). Therefore, this API must be called after [OH_NNCompilation_Build](capi-neural-network-core-h.md#oh_nncompilation_build) is complete.

**Since**: 11


**Parameters**

| Name| Description|
| -- | -- |
| [OH_NNCompilation](capi-neuralnetworkruntime-oh-nncompilation.md) *compilation | Pointer to the [OH_NNCompilation](capi-neuralnetworkruntime-oh-nncompilation.md) instance.|
| const void *buffer | Pointer to the given memory.|
| size_t length | Memory length.|
| size_t *modelSize | Size of the model cache, in bytes.|

**Returns**

| Type| Description|
| -- | -- |
| [OH_NN_ReturnCode](capi-neural-network-runtime-type-h.md#oh_nn_returncode) | Execution result of the function. If the operation is successful, **OH_NN_SUCCESS** is returned. If the operation fails, an error is returned. For details about the error codes, see [OH_NN_ReturnCode](capi-neural-network-runtime-type-h.md#oh_nn_returncode).|

### OH_NNCompilation_ImportCacheFromBuffer()

```
OH_NN_ReturnCode OH_NNCompilation_ImportCacheFromBuffer(OH_NNCompilation *compilation, const void *buffer, size_t modelSize)
```

**Description**

Reads the model cache from the specified buffer.

For details about the model cache, see [OH_NNCompilation_SetCache](capi-neural-network-core-h.md#oh_nncompilation_setcache).

After calling [OH_NNCompilation_ImportCacheFromBuffer](capi-neural-network-core-h.md#oh_nncompilation_importcachefrombuffer), call [OH_NNCompilation_Build](capi-neural-network-core-h.md#oh_nncompilation_build) to complete model recovery.

The **compilation** instance only stores the **buffer** pointer and does not copy its data. Do not release **buffer** before the **compilation** instance is destroyed.

**Since**: 11


**Parameters**

| Name| Description|
| -- | -- |
| [OH_NNCompilation](capi-neuralnetworkruntime-oh-nncompilation.md) *compilation | Pointer to the [OH_NNCompilation](capi-neuralnetworkruntime-oh-nncompilation.md) instance.|
| const void *buffer | Pointer to the given memory.|
| size_t modelSize | Size of the model cache, in bytes.|

**Returns**

| Type| Description|
| -- | -- |
| [OH_NN_ReturnCode](capi-neural-network-runtime-type-h.md#oh_nn_returncode) | Execution result of the function. If the operation is successful, **OH_NN_SUCCESS** is returned. If the operation fails, an error is returned. For details about the error codes, see [OH_NN_ReturnCode](capi-neural-network-runtime-type-h.md#oh_nn_returncode).|

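Together with [OH_NNCompilation_ExportCacheToBuffer](capi-neural-network-core-h.md#oh_nncompilation_exportcachetobuffer), this API lets an application export a compiled model once and restore it later without recompiling. The sketch below is illustrative only and omits error handling; the buffer capacity is a hypothetical application-chosen value, and `built` stands for a compilation instance on which [OH_NNCompilation_Build](capi-neural-network-core-h.md#oh_nncompilation_build) has already succeeded.

```
#include <neural_network_runtime/neural_network_core.h>
#include <stdlib.h>

// Export the cache of an already-built compilation instance `built`.
size_t modelSize = 0;
const size_t capacity = 4 * 1024 * 1024;  // hypothetical upper bound chosen by the app
void *buffer = malloc(capacity);
OH_NNCompilation_ExportCacheToBuffer(built, buffer, capacity, &modelSize);

// Later: restore an executable compilation instance from the same buffer.
OH_NNCompilation *restored = OH_NNCompilation_ConstructForCache();
OH_NNCompilation_ImportCacheFromBuffer(restored, buffer, modelSize);
OH_NNCompilation_Build(restored);  // completes model recovery
// `buffer` must stay valid until `restored` is destroyed: the pointer is not copied.
```
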
### OH_NNCompilation_AddExtensionConfig()

```
OH_NN_ReturnCode OH_NNCompilation_AddExtensionConfig(OH_NNCompilation *compilation, const char *configName, const void *configValue, const size_t configValueSize)
```

**Description**

Adds extended configurations for custom device attributes.

Some devices have their own attributes that have not been opened up in NNRt. This API enables you to set custom attributes for these devices.

You need to obtain their names and values from the device vendor's documentation and add them to the model compilation instance. These attributes are passed directly to the device driver. If the device driver cannot parse the attributes, an error is returned.

After [OH_NNCompilation_Build](capi-neural-network-core-h.md#oh_nncompilation_build) is called, **configName** and **configValue** can be released.

**Since**: 11


**Parameters**

| Name| Description|
| -- | -- |
| [OH_NNCompilation](capi-neuralnetworkruntime-oh-nncompilation.md) *compilation | Pointer to the [OH_NNCompilation](capi-neuralnetworkruntime-oh-nncompilation.md) instance.|
| const char *configName | Configuration name.|
| const void *configValue | Configuration value.|
| const size_t configValueSize | Size of the configuration value, in bytes.|

**Returns**

| Type| Description|
| -- | -- |
| [OH_NN_ReturnCode](capi-neural-network-runtime-type-h.md#oh_nn_returncode) | Execution result of the function. If the operation is successful, **OH_NN_SUCCESS** is returned. If the operation fails, an error is returned. For details about the error codes, see [OH_NN_ReturnCode](capi-neural-network-runtime-type-h.md#oh_nn_returncode).|

### OH_NNCompilation_SetDevice()

```
OH_NN_ReturnCode OH_NNCompilation_SetDevice(OH_NNCompilation *compilation, size_t deviceID)
```

**Description**

Sets the device for model compilation and computing.

In the compilation phase, you need to specify the device for model compilation and computing. Call [OH_NNDevice_GetAllDevicesID](capi-neural-network-core-h.md#oh_nndevice_getalldevicesid) to obtain available device IDs. Then, call [OH_NNDevice_GetName](capi-neural-network-core-h.md#oh_nndevice_getname) and [OH_NNDevice_GetType](capi-neural-network-core-h.md#oh_nndevice_gettype) to obtain device information and pass the target device ID to this API.

**Since**: 9


**Parameters**

| Name| Description|
| -- | -- |
| [OH_NNCompilation](capi-neuralnetworkruntime-oh-nncompilation.md) *compilation | Pointer to the [OH_NNCompilation](capi-neuralnetworkruntime-oh-nncompilation.md) instance.|
| size_t deviceID | Device ID. If the value is **0**, the first device in the current device list is used by default.|

**Returns**

| Type| Description|
| -- | -- |
| [OH_NN_ReturnCode](capi-neural-network-runtime-type-h.md#oh_nn_returncode) | Execution result of the function. If the operation is successful, **OH_NN_SUCCESS** is returned. If the operation fails, an error is returned. For details about the error codes, see [OH_NN_ReturnCode](capi-neural-network-runtime-type-h.md#oh_nn_returncode).|

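The enumerate-then-select flow described above can be sketched as follows. This is an illustrative fragment, not a kit sample: it simply binds the first reported device and assumes an existing compilation instance; real code would inspect the name and type to pick a suitable device.

```
#include <neural_network_runtime/neural_network_core.h>

// Sketch: enumerate devices and bind the first one to a compilation instance.
OH_NN_ReturnCode PickFirstDevice(OH_NNCompilation *compilation)
{
    const size_t *allDevicesID = NULL;
    uint32_t deviceCount = 0;
    OH_NN_ReturnCode ret = OH_NNDevice_GetAllDevicesID(&allDevicesID, &deviceCount);
    if (ret != OH_NN_SUCCESS || deviceCount == 0) {
        return OH_NN_FAILED;  // assumed generic failure code from the type header
    }
    const char *name = NULL;  // must be passed in as a null pointer
    OH_NN_DeviceType type;
    OH_NNDevice_GetName(allDevicesID[0], &name);   // e.g. to log or filter by vendor
    OH_NNDevice_GetType(allDevicesID[0], &type);   // CPU / GPU / accelerator / others
    return OH_NNCompilation_SetDevice(compilation, allDevicesID[0]);
}
```
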
### OH_NNCompilation_SetCache()

```
OH_NN_ReturnCode OH_NNCompilation_SetCache(OH_NNCompilation *compilation, const char *cachePath, uint32_t version)
```

**Description**

Sets the cache directory and version for model compilation.

On a device that supports model caching, a model can be saved as a cache file after a successful compilation at the device driver layer. In later compilations, the model can be read directly from the cache file, which saves compilation time.

This API performs different operations based on the model cache directory and version:

- If no file exists in the specified model cache directory, cache the built model to the directory and set the cache version to the value of **version**.
- If a complete cached file exists in the specified model cache directory, and its version number is equal to **version**, read the cached file in the directory and pass it to the underlying device to convert it into an executable model instance.
- If a complete cached file exists in the specified model cache directory, but its version is earlier than **version**, update the cached file. After the model is built on the underlying device, the cached file in the cache directory is overwritten and the version is updated to **version**.
- If a complete cached file exists in the specified model cache directory, but its version is later than **version**, the cached file is not read and the error code [OH_NN_INVALID_PARAMETER](capi-neural-network-runtime-type-h.md#oh_nn_returncode) is returned.
- If the cached file in the specified model cache directory is incomplete or you do not have the file access permission, the error code [OH_NN_INVALID_FILE](capi-neural-network-runtime-type-h.md#oh_nn_returncode) is returned.
- If the model cache directory does not exist or you do not have the file access permission, the error code [OH_NN_INVALID_PATH](capi-neural-network-runtime-type-h.md#oh_nn_returncode) is returned.

**Since**: 9


**Parameters**

| Name| Description|
| -- | -- |
| [OH_NNCompilation](capi-neuralnetworkruntime-oh-nncompilation.md) *compilation | Pointer to the [OH_NNCompilation](capi-neuralnetworkruntime-oh-nncompilation.md) instance.|
| const char *cachePath | Directory for storing model cache files. This API creates model cache directories for different devices in the **cachePath** directory. You are advised to use a separate cache directory for each model.|
| uint32_t version | Cached model version.|

**Returns**

| Type| Description|
| -- | -- |
| [OH_NN_ReturnCode](capi-neural-network-runtime-type-h.md#oh_nn_returncode) | Execution result of the function. If the operation is successful, **OH_NN_SUCCESS** is returned. If the operation fails, an error is returned. For details about the error codes, see [OH_NN_ReturnCode](capi-neural-network-runtime-type-h.md#oh_nn_returncode).|

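The cache behavior above means the same build call serves both the first run (compile and write the cache) and later runs (load from the cache). A minimal sketch, with a hypothetical application sandbox path and an arbitrary version number; error handling is reduced to returning the first failure:

```
#include <neural_network_runtime/neural_network_core.h>

// First run: compiles and writes the cache. Later runs: loads from the cache.
OH_NN_ReturnCode BuildWithCache(OH_NNCompilation *compilation, size_t deviceID)
{
    OH_NN_ReturnCode ret = OH_NNCompilation_SetDevice(compilation, deviceID);
    if (ret != OH_NN_SUCCESS) {
        return ret;
    }
    // "/data/.../nncache" is a hypothetical app-owned directory; bump the
    // version whenever the model changes so stale caches are rewritten.
    ret = OH_NNCompilation_SetCache(compilation, "/data/storage/el2/base/cache/nncache", 1);
    if (ret != OH_NN_SUCCESS) {
        return ret;
    }
    return OH_NNCompilation_Build(compilation);
}
```
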
### OH_NNCompilation_SetPerformanceMode()

```
OH_NN_ReturnCode OH_NNCompilation_SetPerformanceMode(OH_NNCompilation *compilation, OH_NN_PerformanceMode performanceMode)
```

**Description**

Sets the performance mode for model computing.

NNRt allows you to set the performance mode for model computing to meet the requirements of low power consumption and ultimate performance. If this API is not called to set the performance mode in the compilation phase, the model compilation instance assigns the [OH_NN_PERFORMANCE_NONE](capi-neural-network-runtime-type-h.md#oh_nn_performancemode) mode for the model by default. In this case, the device performs computing in the default performance mode. If this API is called on a device that does not support setting of the performance mode, the error code [OH_NN_UNAVAILABLE_DEVICE](capi-neural-network-runtime-type-h.md#oh_nn_returncode) is returned.

**Since**: 9


**Parameters**

| Name| Description|
| -- | -- |
| [OH_NNCompilation](capi-neuralnetworkruntime-oh-nncompilation.md) *compilation | Pointer to the [OH_NNCompilation](capi-neuralnetworkruntime-oh-nncompilation.md) instance.|
| [OH_NN_PerformanceMode](capi-neural-network-runtime-type-h.md#oh_nn_performancemode) performanceMode | Performance mode for model computing. For details, see [OH_NN_PerformanceMode](capi-neural-network-runtime-type-h.md#oh_nn_performancemode).|

**Returns**

| Type| Description|
| -- | -- |
| [OH_NN_ReturnCode](capi-neural-network-runtime-type-h.md#oh_nn_returncode) | Execution result of the function. If the operation is successful, **OH_NN_SUCCESS** is returned. If the operation fails, an error is returned. For details about the error codes, see [OH_NN_ReturnCode](capi-neural-network-runtime-type-h.md#oh_nn_returncode).|

### OH_NNCompilation_SetPriority()

```
OH_NN_ReturnCode OH_NNCompilation_SetPriority(OH_NNCompilation *compilation, OH_NN_Priority priority)
```

**Description**

Sets the priority for model computing.

NNRt allows you to set computing priorities for models. A priority applies only to models created by processes with the same UID; it does not affect models created by processes with different UIDs or models on different devices. If this API is called on a device that does not support priority setting, the error code [OH_NN_UNAVAILABLE_DEVICE](capi-neural-network-runtime-type-h.md#oh_nn_returncode) is returned.

**Since**: 9


**Parameters**

| Name| Description|
| -- | -- |
| [OH_NNCompilation](capi-neuralnetworkruntime-oh-nncompilation.md) *compilation | Pointer to the [OH_NNCompilation](capi-neuralnetworkruntime-oh-nncompilation.md) instance.|
| [OH_NN_Priority](capi-neural-network-runtime-type-h.md#oh_nn_priority) priority | Priority for model computing. For details about the available priorities, see [OH_NN_Priority](capi-neural-network-runtime-type-h.md#oh_nn_priority).|

**Returns**

| Type| Description|
| -- | -- |
| [OH_NN_ReturnCode](capi-neural-network-runtime-type-h.md#oh_nn_returncode) | Execution result of the function. If the operation is successful, **OH_NN_SUCCESS** is returned. If the operation fails, an error is returned. For details about the error codes, see [OH_NN_ReturnCode](capi-neural-network-runtime-type-h.md#oh_nn_returncode).|

### OH_NNCompilation_EnableFloat16()

```
OH_NN_ReturnCode OH_NNCompilation_EnableFloat16(OH_NNCompilation *compilation, bool enableFloat16)
```

**Description**

Enables float16 for computing.

By default, floating-point models use float32 for computing. If this API is called on a device that supports float16, a float32 floating-point model will use float16 for computing, reducing memory usage and execution time. This option is invalid for fixed-point models, for example, fixed-point models of the int8 type.

If this API is called on a device that does not support float16, the error code [OH_NN_UNAVAILABLE_DEVICE](capi-neural-network-runtime-type-h.md#oh_nn_returncode) is returned.

**Since**: 9


**Parameters**

| Name| Description|
| -- | -- |
| [OH_NNCompilation](capi-neuralnetworkruntime-oh-nncompilation.md) *compilation | Pointer to the [OH_NNCompilation](capi-neuralnetworkruntime-oh-nncompilation.md) instance.|
| bool enableFloat16 | Whether to enable float16. If this parameter is set to **true**, float16 inference is performed. If this parameter is set to **false**, float32 inference is performed.|

**Returns**

| Type| Description|
| -- | -- |
| [OH_NN_ReturnCode](capi-neural-network-runtime-type-h.md#oh_nn_returncode) | Execution result of the function. If the operation is successful, **OH_NN_SUCCESS** is returned. If the operation fails, an error is returned. For details about the error codes, see [OH_NN_ReturnCode](capi-neural-network-runtime-type-h.md#oh_nn_returncode).|

### OH_NNCompilation_Build()

```
OH_NN_ReturnCode OH_NNCompilation_Build(OH_NNCompilation *compilation)
```

**Description**

Performs model compilation.

After the compilation configuration is complete, call this API to start model compilation. The model compilation instance pushes the model and compilation options to the device for compilation.

After this API is called, additional compilation operations cannot be performed. If [OH_NNCompilation_SetDevice](capi-neural-network-core-h.md#oh_nncompilation_setdevice), [OH_NNCompilation_SetCache](capi-neural-network-core-h.md#oh_nncompilation_setcache), [OH_NNCompilation_SetPerformanceMode](capi-neural-network-core-h.md#oh_nncompilation_setperformancemode), [OH_NNCompilation_SetPriority](capi-neural-network-core-h.md#oh_nncompilation_setpriority), or [OH_NNCompilation_EnableFloat16](capi-neural-network-core-h.md#oh_nncompilation_enablefloat16) is called, [OH_NN_OPERATION_FORBIDDEN](capi-neural-network-runtime-type-h.md#oh_nn_returncode) is returned.

**Since**: 9


**Parameters**

| Name| Description|
| -- | -- |
| [OH_NNCompilation](capi-neuralnetworkruntime-oh-nncompilation.md) *compilation | Pointer to the [OH_NNCompilation](capi-neuralnetworkruntime-oh-nncompilation.md) instance.|

**Returns**

| Type| Description|
| -- | -- |
| [OH_NN_ReturnCode](capi-neural-network-runtime-type-h.md#oh_nn_returncode) | Execution result of the function. If the operation is successful, **OH_NN_SUCCESS** is returned. If the operation fails, an error is returned. For details about the error codes, see [OH_NN_ReturnCode](capi-neural-network-runtime-type-h.md#oh_nn_returncode).|

### OH_NNCompilation_Destroy()

```
void OH_NNCompilation_Destroy(OH_NNCompilation **compilation)
```

**Description**

Destroys an [OH_NNCompilation](capi-neuralnetworkruntime-oh-nncompilation.md) instance.

Call this API to destroy the model compilation instances created by calling [OH_NNCompilation_Construct](capi-neural-network-core-h.md#oh_nncompilation_construct), [OH_NNCompilation_ConstructWithOfflineModelFile](capi-neural-network-core-h.md#oh_nncompilation_constructwithofflinemodelfile), [OH_NNCompilation_ConstructWithOfflineModelBuffer](capi-neural-network-core-h.md#oh_nncompilation_constructwithofflinemodelbuffer), or [OH_NNCompilation_ConstructForCache](capi-neural-network-core-h.md#oh_nncompilation_constructforcache). If **compilation** or **\*compilation** is a null pointer, this API only prints warning logs but does not perform the destruction operation.

**Since**: 9


**Parameters**

| Name| Description|
| -- | -- |
| [OH_NNCompilation](capi-neuralnetworkruntime-oh-nncompilation.md) **compilation | Level-2 pointer to the [OH_NNCompilation](capi-neuralnetworkruntime-oh-nncompilation.md) instance. After the model compilation instance is destroyed, this API sets **\*compilation** to a null pointer.|

### OH_NNTensorDesc_Create()

```
NN_TensorDesc *OH_NNTensorDesc_Create()
```

**Description**

Creates an [NN_TensorDesc](capi-neuralnetworkruntime-nn-tensordesc.md) instance.

[NN_TensorDesc](capi-neuralnetworkruntime-nn-tensordesc.md) describes various tensor attributes, such as the name, data type, shape, and format.

You can create an [NN_Tensor](capi-neuralnetworkruntime-nn-tensor.md) instance based on the input [NN_TensorDesc](capi-neuralnetworkruntime-nn-tensordesc.md) instance by calling the following APIs:

[OH_NNTensor_Create](capi-neural-network-core-h.md#oh_nntensor_create)

[OH_NNTensor_CreateWithSize](capi-neural-network-core-h.md#oh_nntensor_createwithsize)

[OH_NNTensor_CreateWithFd](capi-neural-network-core-h.md#oh_nntensor_createwithfd)

This API copies the [NN_TensorDesc](capi-neuralnetworkruntime-nn-tensordesc.md) instance to [NN_Tensor](capi-neuralnetworkruntime-nn-tensor.md). This way, you can create multiple [NN_Tensor](capi-neuralnetworkruntime-nn-tensor.md) instances with the same [NN_TensorDesc](capi-neuralnetworkruntime-nn-tensordesc.md) instance. If the [NN_TensorDesc](capi-neuralnetworkruntime-nn-tensordesc.md) instance is no longer needed, call [OH_NNTensorDesc_Destroy](capi-neural-network-core-h.md#oh_nntensordesc_destroy) to destroy it.

**Since**: 11

**Returns**

| Type| Description|
| -- | -- |
| [NN_TensorDesc](capi-neuralnetworkruntime-nn-tensordesc.md) * | Pointer to the [NN_TensorDesc](capi-neuralnetworkruntime-nn-tensordesc.md) instance. If the operation fails, **NULL** is returned.|

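A typical lifecycle — create the description, fill in attributes, create tensors from it, then destroy it — can be sketched as follows. This is an illustrative fragment: `OH_NNTensorDesc_SetDataType` and `OH_NNTensorDesc_SetShape` are sibling setters declared in this header alongside [OH_NNTensorDesc_SetName](capi-neural-network-core-h.md#oh_nntensordesc_setname), and the name, shape, and `OH_NN_FLOAT32` data type are arbitrary example values.

```
#include <neural_network_runtime/neural_network_core.h>

// Sketch: describe a float32 tensor of shape [1, 3, 224, 224], then release it.
NN_TensorDesc *desc = OH_NNTensorDesc_Create();
int32_t shape[] = {1, 3, 224, 224};
OH_NNTensorDesc_SetName(desc, "input0");                 // example tensor name
OH_NNTensorDesc_SetDataType(desc, OH_NN_FLOAT32);        // example data type
OH_NNTensorDesc_SetShape(desc, shape, 4);                // 4 dimensions
// ... create one or more NN_Tensor instances from `desc` via OH_NNTensor_Create() ...
OH_NNTensorDesc_Destroy(&desc);  // safe once the tensors exist: the desc is copied
```
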
### OH_NNTensorDesc_Destroy()

```
OH_NN_ReturnCode OH_NNTensorDesc_Destroy(NN_TensorDesc **tensorDesc)
```

**Description**

Destroys an [NN_TensorDesc](capi-neuralnetworkruntime-nn-tensordesc.md) instance.

If the [NN_TensorDesc](capi-neuralnetworkruntime-nn-tensordesc.md) instance is no longer needed, call this API to destroy it. Otherwise, a memory leak occurs.

If **tensorDesc** or **\*tensorDesc** is a null pointer, an error is returned but the object will not be destroyed.

**Since**: 11


**Parameters**

| Name| Description|
| -- | -- |
| [NN_TensorDesc](capi-neuralnetworkruntime-nn-tensordesc.md) **tensorDesc | Level-2 pointer to the [NN_TensorDesc](capi-neuralnetworkruntime-nn-tensordesc.md) instance.|

**Returns**

| Type| Description|
| -- | -- |
| [OH_NN_ReturnCode](capi-neural-network-runtime-type-h.md#oh_nn_returncode) | Execution result of the function. If the operation is successful, **OH_NN_SUCCESS** is returned. If the operation fails, an error is returned. For details about the error codes, see [OH_NN_ReturnCode](capi-neural-network-runtime-type-h.md#oh_nn_returncode).|

### OH_NNTensorDesc_SetName()

```
OH_NN_ReturnCode OH_NNTensorDesc_SetName(NN_TensorDesc *tensorDesc, const char *name)
```

**Description**

Sets the name of an [NN_TensorDesc](capi-neuralnetworkruntime-nn-tensordesc.md) instance.

After an [NN_TensorDesc](capi-neuralnetworkruntime-nn-tensordesc.md) instance is created, call this API to set the tensor name. The value of **\*name** is a C-style string ending with **\0**.

If **tensorDesc** or **name** is a null pointer, an error is returned.

**Since**: 11

**Parameters**

| Name| Description|
| -- | -- |
| [NN_TensorDesc](capi-neuralnetworkruntime-nn-tensordesc.md) *tensorDesc | Pointer to the [NN_TensorDesc](capi-neuralnetworkruntime-nn-tensordesc.md) instance.|
| const char *name | Name of the tensor to be set.|

**Returns**

| Type| Description|
| -- | -- |
| [OH_NN_ReturnCode](capi-neural-network-runtime-type-h.md#oh_nn_returncode) | Execution result of the function. If the operation is successful, **OH_NN_SUCCESS** is returned. If the operation fails, an error is returned. For details about the error codes, see [OH_NN_ReturnCode](capi-neural-network-runtime-type-h.md#oh_nn_returncode).|

### OH_NNTensorDesc_GetName()

```
OH_NN_ReturnCode OH_NNTensorDesc_GetName(const NN_TensorDesc *tensorDesc, const char **name)
```

**Description**

Obtains the name of an [NN_TensorDesc](capi-neuralnetworkruntime-nn-tensordesc.md) instance.

You can use this API to obtain the name of the specified [NN_TensorDesc](capi-neuralnetworkruntime-nn-tensordesc.md) instance. The value of **\*name** is a C-style string ending with **\0**.

If **tensorDesc** or **name** is a null pointer, an error is returned. As an output parameter, **\*name** must be a null pointer. Otherwise, an error is returned.

For example, you should define **char\* tensorName = NULL** and pass **&tensorName** as the **name** argument.

You do not need to release the memory of **name**. It is released automatically when **tensorDesc** is destroyed.

**Since**: 11

**Parameters**

| Name| Description|
| -- | -- |
| [const NN_TensorDesc](capi-neuralnetworkruntime-nn-tensordesc.md) *tensorDesc | Pointer to the [NN_TensorDesc](capi-neuralnetworkruntime-nn-tensordesc.md) instance.|
| const char **name | Tensor name.|

**Returns**

| Type| Description|
| -- | -- |
| [OH_NN_ReturnCode](capi-neural-network-runtime-type-h.md#oh_nn_returncode) | Execution result of the function. If the operation is successful, **OH_NN_SUCCESS** is returned. If the operation fails, an error is returned. For details about the error codes, see [OH_NN_ReturnCode](capi-neural-network-runtime-type-h.md#oh_nn_returncode).|

### OH_NNTensorDesc_SetDataType()

```
OH_NN_ReturnCode OH_NNTensorDesc_SetDataType(NN_TensorDesc *tensorDesc, OH_NN_DataType dataType)
```

**Description**

Sets the data type of an [NN_TensorDesc](capi-neuralnetworkruntime-nn-tensordesc.md) instance.

After an [NN_TensorDesc](capi-neuralnetworkruntime-nn-tensordesc.md) instance is created, call this API to set the tensor data type.

If **tensorDesc** is a null pointer, an error is returned.

**Since**: 11

**Parameters**

| Name| Description|
| -- | -- |
| [NN_TensorDesc](capi-neuralnetworkruntime-nn-tensordesc.md) *tensorDesc | Pointer to the [NN_TensorDesc](capi-neuralnetworkruntime-nn-tensordesc.md) instance.|
| [OH_NN_DataType](capi-neural-network-runtime-type-h.md#oh_nn_datatype) dataType | Tensor data type to be set.|

**Returns**

| Type| Description|
| -- | -- |
| [OH_NN_ReturnCode](capi-neural-network-runtime-type-h.md#oh_nn_returncode) | Execution result of the function. If the operation is successful, **OH_NN_SUCCESS** is returned. If the operation fails, an error is returned. For details about the error codes, see [OH_NN_ReturnCode](capi-neural-network-runtime-type-h.md#oh_nn_returncode).|

### OH_NNTensorDesc_GetDataType()

```
OH_NN_ReturnCode OH_NNTensorDesc_GetDataType(const NN_TensorDesc *tensorDesc, OH_NN_DataType *dataType)
```

**Description**

Obtains the data type of an [NN_TensorDesc](capi-neuralnetworkruntime-nn-tensordesc.md) instance.

You can use this API to obtain the data type of the specified [NN_TensorDesc](capi-neuralnetworkruntime-nn-tensordesc.md) instance.

If **tensorDesc** or **dataType** is a null pointer, an error is returned.

**Since**: 11

**Parameters**

| Name| Description|
| -- | -- |
| [const NN_TensorDesc](capi-neuralnetworkruntime-nn-tensordesc.md) *tensorDesc | Pointer to the [NN_TensorDesc](capi-neuralnetworkruntime-nn-tensordesc.md) instance.|
| [OH_NN_DataType](capi-neural-network-runtime-type-h.md#oh_nn_datatype) *dataType | Tensor data type.|

**Returns**

| Type| Description|
| -- | -- |
| [OH_NN_ReturnCode](capi-neural-network-runtime-type-h.md#oh_nn_returncode) | Execution result of the function. If the operation is successful, **OH_NN_SUCCESS** is returned. If the operation fails, an error is returned. For details about the error codes, see [OH_NN_ReturnCode](capi-neural-network-runtime-type-h.md#oh_nn_returncode).|

### OH_NNTensorDesc_SetShape()

```
OH_NN_ReturnCode OH_NNTensorDesc_SetShape(NN_TensorDesc *tensorDesc, const int32_t *shape, size_t shapeLength)
```

**Description**

Sets the data shape of an [NN_TensorDesc](capi-neuralnetworkruntime-nn-tensordesc.md) instance.

After an [NN_TensorDesc](capi-neuralnetworkruntime-nn-tensordesc.md) instance is created, call this API to set the tensor shape.

If **tensorDesc** or **shape** is a null pointer, or **shapeLength** is **0**, an error is returned.

**Since**: 11

**Parameters**

| Name| Description|
| -- | -- |
| [NN_TensorDesc](capi-neuralnetworkruntime-nn-tensordesc.md) *tensorDesc | Pointer to the [NN_TensorDesc](capi-neuralnetworkruntime-nn-tensordesc.md) instance.|
| const int32_t *shape | List of tensor shapes to be set.|
| size_t shapeLength | Length of the list of tensor shapes.|

**Returns**

| Type| Description|
| -- | -- |
| [OH_NN_ReturnCode](capi-neural-network-runtime-type-h.md#oh_nn_returncode) | Execution result of the function. If the operation is successful, **OH_NN_SUCCESS** is returned. If the operation fails, an error is returned. For details about the error codes, see [OH_NN_ReturnCode](capi-neural-network-runtime-type-h.md#oh_nn_returncode).|

### OH_NNTensorDesc_GetShape()

```
OH_NN_ReturnCode OH_NNTensorDesc_GetShape(const NN_TensorDesc *tensorDesc, int32_t **shape, size_t *shapeLength)
```

**Description**

Obtains the shape of an [NN_TensorDesc](capi-neuralnetworkruntime-nn-tensordesc.md) instance.

You can use this API to obtain the shape of the specified [NN_TensorDesc](capi-neuralnetworkruntime-nn-tensordesc.md) instance.

If **tensorDesc**, **shape**, or **shapeLength** is a null pointer, an error is returned. As an output parameter, **\*shape** must be a null pointer. Otherwise, an error is returned.

For example, you should define **int32_t\* tensorShape = NULL** and pass **&tensorShape** as the **shape** argument.

You do not need to release the memory of **shape**. It is released automatically when **tensorDesc** is destroyed.

**Since**: 11

**Parameters**

| Name| Description|
| -- | -- |
| [const NN_TensorDesc](capi-neuralnetworkruntime-nn-tensordesc.md) *tensorDesc | Pointer to the [NN_TensorDesc](capi-neuralnetworkruntime-nn-tensordesc.md) instance.|
| int32_t **shape | List of tensor shapes.|
| size_t *shapeLength | Length of the list of tensor shapes.|

**Returns**

| Type| Description|
| -- | -- |
| [OH_NN_ReturnCode](capi-neural-network-runtime-type-h.md#oh_nn_returncode) | Execution result of the function. If the operation is successful, **OH_NN_SUCCESS** is returned. If the operation fails, an error is returned. For details about the error codes, see [OH_NN_ReturnCode](capi-neural-network-runtime-type-h.md#oh_nn_returncode).|

### OH_NNTensorDesc_SetFormat()

```
OH_NN_ReturnCode OH_NNTensorDesc_SetFormat(NN_TensorDesc *tensorDesc, OH_NN_Format format)
```

**Description**

Sets the data format of an [NN_TensorDesc](capi-neuralnetworkruntime-nn-tensordesc.md) instance.

After an [NN_TensorDesc](capi-neuralnetworkruntime-nn-tensordesc.md) instance is created, call this API to set [OH_NN_Format](capi-neural-network-runtime-type-h.md#oh_nn_format) of the tensor.

If **tensorDesc** is a null pointer, an error is returned.

**Since**: 11

**Parameters**

| Name| Description|
| -- | -- |
| [NN_TensorDesc](capi-neuralnetworkruntime-nn-tensordesc.md) *tensorDesc | Pointer to the [NN_TensorDesc](capi-neuralnetworkruntime-nn-tensordesc.md) instance.|
| [OH_NN_Format](capi-neural-network-runtime-type-h.md#oh_nn_format) format | Tensor data format to be set.|

**Returns**

| Type| Description|
| -- | -- |
| [OH_NN_ReturnCode](capi-neural-network-runtime-type-h.md#oh_nn_returncode) | Execution result of the function. If the operation is successful, **OH_NN_SUCCESS** is returned. If the operation fails, an error is returned. For details about the error codes, see [OH_NN_ReturnCode](capi-neural-network-runtime-type-h.md#oh_nn_returncode).|

### OH_NNTensorDesc_GetFormat()

```
OH_NN_ReturnCode OH_NNTensorDesc_GetFormat(const NN_TensorDesc *tensorDesc, OH_NN_Format *format)
```

**Description**

Obtains the data format of an [NN_TensorDesc](capi-neuralnetworkruntime-nn-tensordesc.md) instance.

You can use this API to obtain [OH_NN_Format](capi-neural-network-runtime-type-h.md#oh_nn_format) of the specified [NN_TensorDesc](capi-neuralnetworkruntime-nn-tensordesc.md) instance.

If **tensorDesc** or **format** is a null pointer, an error is returned.

**Since**: 11

**Parameters**

| Name| Description|
| -- | -- |
| [const NN_TensorDesc](capi-neuralnetworkruntime-nn-tensordesc.md) *tensorDesc | Pointer to the [NN_TensorDesc](capi-neuralnetworkruntime-nn-tensordesc.md) instance.|
| [OH_NN_Format](capi-neural-network-runtime-type-h.md#oh_nn_format) *format | Tensor data format.|

**Returns**

| Type| Description|
| -- | -- |
| [OH_NN_ReturnCode](capi-neural-network-runtime-type-h.md#oh_nn_returncode) | Execution result of the function. If the operation is successful, **OH_NN_SUCCESS** is returned. If the operation fails, an error is returned. For details about the error codes, see [OH_NN_ReturnCode](capi-neural-network-runtime-type-h.md#oh_nn_returncode).|

### OH_NNTensorDesc_GetElementCount()

```
OH_NN_ReturnCode OH_NNTensorDesc_GetElementCount(const NN_TensorDesc *tensorDesc, size_t *elementCount)
```

**Description**

Obtains the number of elements in an [NN_TensorDesc](capi-neuralnetworkruntime-nn-tensordesc.md) instance.

You can use this API to obtain the number of elements in the specified [NN_TensorDesc](capi-neuralnetworkruntime-nn-tensordesc.md) instance. To obtain the size of tensor data, call [OH_NNTensorDesc_GetByteSize](capi-neural-network-core-h.md#oh_nntensordesc_getbytesize).

If the tensor shape is dynamically variable, this API returns an error code and **elementCount** is **0**.

If **tensorDesc** or **elementCount** is a null pointer, an error is returned.

**Since**: 11

**Parameters**

| Name| Description|
| -- | -- |
| [const NN_TensorDesc](capi-neuralnetworkruntime-nn-tensordesc.md) *tensorDesc | Pointer to the [NN_TensorDesc](capi-neuralnetworkruntime-nn-tensordesc.md) instance.|
| size_t *elementCount | Number of elements.|

**Returns**

| Type| Description|
| -- | -- |
| [OH_NN_ReturnCode](capi-neural-network-runtime-type-h.md#oh_nn_returncode) | Execution result of the function. If the operation is successful, **OH_NN_SUCCESS** is returned. If the operation fails, an error is returned. For details about the error codes, see [OH_NN_ReturnCode](capi-neural-network-runtime-type-h.md#oh_nn_returncode).|

### OH_NNTensorDesc_GetByteSize()

```
OH_NN_ReturnCode OH_NNTensorDesc_GetByteSize(const NN_TensorDesc *tensorDesc, size_t *byteSize)
```

**Description**

Obtains the number of bytes occupied by the tensor data, calculated from the shape and data type of an [NN_TensorDesc](capi-neuralnetworkruntime-nn-tensordesc.md) instance.

If the tensor shape is dynamically variable, this API returns an error code and **byteSize** is **0**.

To obtain the number of elements in the tensor data, call [OH_NNTensorDesc_GetElementCount](capi-neural-network-core-h.md#oh_nntensordesc_getelementcount).

If **tensorDesc** or **byteSize** is a null pointer, an error is returned.

**Since**: 11

**Parameters**

| Name| Description|
| -- | -- |
| [const NN_TensorDesc](capi-neuralnetworkruntime-nn-tensordesc.md) *tensorDesc | Pointer to the [NN_TensorDesc](capi-neuralnetworkruntime-nn-tensordesc.md) instance.|
| size_t *byteSize | Size of the returned data, in bytes.|

**Returns**

| Type| Description|
| -- | -- |
| [OH_NN_ReturnCode](capi-neural-network-runtime-type-h.md#oh_nn_returncode) | Execution result of the function. If the operation is successful, **OH_NN_SUCCESS** is returned. If the operation fails, an error is returned. For details about the error codes, see [OH_NN_ReturnCode](capi-neural-network-runtime-type-h.md#oh_nn_returncode).|

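For a static shape, the byte size is the element count multiplied by the size of one element. A hedged sketch, assuming a fully specified descriptor (the function name is illustrative):

```c
#include <neural_network_runtime/neural_network_core.h>
#include <stdio.h>

// Sketch: query element count and byte size from a descriptor.
// For a float32 tensor of shape [1, 3, 224, 224], the element count
// is 150528 and the byte size is 602112 (150528 * 4 bytes).
static void PrintTensorSizes(const NN_TensorDesc *desc)
{
    size_t elementCount = 0;
    size_t byteSize = 0;
    if (OH_NNTensorDesc_GetElementCount(desc, &elementCount) == OH_NN_SUCCESS &&
        OH_NNTensorDesc_GetByteSize(desc, &byteSize) == OH_NN_SUCCESS) {
        printf("elements: %zu, bytes: %zu\n", elementCount, byteSize);
    }
    // Both calls fail with elementCount/byteSize left at 0 if the
    // shape contains a dynamic (negative) dimension.
}
```
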
### OH_NNTensor_Create()

```
NN_Tensor *OH_NNTensor_Create(size_t deviceID, NN_TensorDesc *tensorDesc)
```

**Description**

Creates an [NN_Tensor](capi-neuralnetworkruntime-nn-tensor.md) instance from [NN_TensorDesc](capi-neuralnetworkruntime-nn-tensordesc.md).

This API uses [OH_NNTensorDesc_GetByteSize](capi-neural-network-core-h.md#oh_nntensordesc_getbytesize) to calculate the number of bytes of tensor data and allocate device memory for it. The device driver directly obtains tensor data in zero-copy mode.

This API copies **tensorDesc** to [NN_Tensor](capi-neuralnetworkruntime-nn-tensor.md). Therefore, if **tensorDesc** is no longer needed, destroy it by calling [OH_NNTensorDesc_Destroy](capi-neural-network-core-h.md#oh_nntensordesc_destroy).

If the tensor shape is dynamic, an error is returned.

**deviceID** indicates the selected device. If the value is **0**, the first device in the current device list is used by default.

**tensorDesc** is mandatory. If it is a null pointer, an error is returned.

If the [NN_Tensor](capi-neuralnetworkruntime-nn-tensor.md) instance is no longer needed, destroy it by calling [OH_NNTensor_Destroy](capi-neural-network-core-h.md#oh_nntensor_destroy).

**Since**: 11

**Parameters**

| Name| Description|
| -- | -- |
| size_t deviceID | Device ID. If the value is **0**, the first device in the current device list is used by default.|
| [NN_TensorDesc](capi-neuralnetworkruntime-nn-tensordesc.md) *tensorDesc | Pointer to the [NN_TensorDesc](capi-neuralnetworkruntime-nn-tensordesc.md) instance.|

**Returns**

| Type| Description|
| -- | -- |
| [NN_Tensor](capi-neuralnetworkruntime-nn-tensor.md) * | Pointer to the [NN_Tensor](capi-neuralnetworkruntime-nn-tensor.md) instance. If the operation fails, **NULL** is returned.|

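A minimal sketch of creating a tensor from a configured descriptor and then releasing the descriptor, which is safe because the descriptor is copied into the tensor (names are illustrative):

```c
#include <neural_network_runtime/neural_network_core.h>

// Sketch: allocate device memory for a tensor described by desc.
// Passing deviceID 0 selects the first device in the device list.
static NN_Tensor *CreateTensorOnFirstDevice(NN_TensorDesc *desc)
{
    NN_Tensor *tensor = OH_NNTensor_Create(0, desc);
    if (tensor != NULL) {
        // desc was copied into the tensor, so it can be destroyed now.
        OH_NNTensorDesc_Destroy(&desc);
    }
    return tensor;
}
// Release the tensor later with OH_NNTensor_Destroy(&tensor).
```
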
### OH_NNTensor_CreateWithSize()

```
NN_Tensor *OH_NNTensor_CreateWithSize(size_t deviceID, NN_TensorDesc *tensorDesc, size_t size)
```

**Description**

Creates an [NN_Tensor](capi-neuralnetworkruntime-nn-tensor.md) instance based on the specified memory size and [NN_TensorDesc](capi-neuralnetworkruntime-nn-tensordesc.md) instance.

This API uses **size** as the number of bytes of tensor data and allocates device memory for it. The device driver directly obtains tensor data in zero-copy mode.

Note that this API copies **tensorDesc** to [NN_Tensor](capi-neuralnetworkruntime-nn-tensor.md). Therefore, if **tensorDesc** is no longer needed, destroy it by calling [OH_NNTensorDesc_Destroy](capi-neural-network-core-h.md#oh_nntensordesc_destroy).

**deviceID** indicates the selected device. If the value is **0**, the first device in the current device list is used by default.

**tensorDesc** is mandatory. If it is a null pointer, an error is returned.

The value of **size** must be greater than or equal to the number of bytes occupied by **tensorDesc**, which can be obtained by calling [OH_NNTensorDesc_GetByteSize](capi-neural-network-core-h.md#oh_nntensordesc_getbytesize). Otherwise, an error is returned. If the tensor shape is dynamic, **size** is not checked.

If the [NN_Tensor](capi-neuralnetworkruntime-nn-tensor.md) instance is no longer needed, destroy it by calling [OH_NNTensor_Destroy](capi-neural-network-core-h.md#oh_nntensor_destroy).

**Since**: 11

**Parameters**

| Name| Description|
| -- | -- |
| size_t deviceID | Device ID. If the value is **0**, the first device in the current device list is used by default.|
| [NN_TensorDesc](capi-neuralnetworkruntime-nn-tensordesc.md) *tensorDesc | Pointer to the [NN_TensorDesc](capi-neuralnetworkruntime-nn-tensordesc.md) instance.|
| size_t size | Size of the tensor data to be allocated.|

**Returns**

| Type| Description|
| -- | -- |
| [NN_Tensor](capi-neuralnetworkruntime-nn-tensor.md) * | Pointer to the [NN_Tensor](capi-neuralnetworkruntime-nn-tensor.md) instance. If the operation fails, **NULL** is returned.|

### OH_NNTensor_CreateWithFd()

```
NN_Tensor *OH_NNTensor_CreateWithFd(size_t deviceID, NN_TensorDesc *tensorDesc, int fd, size_t size, size_t offset)
```

**Description**

Creates an [NN_Tensor](capi-neuralnetworkruntime-nn-tensor.md) instance based on the specified file descriptor of the shared memory and [NN_TensorDesc](capi-neuralnetworkruntime-nn-tensordesc.md) instance.

This API reuses the shared memory corresponding to **fd**, which may come from another [NN_Tensor](capi-neuralnetworkruntime-nn-tensor.md) instance. When the tensor created by this API is destroyed by calling [OH_NNTensor_Destroy](capi-neural-network-core-h.md#oh_nntensor_destroy), the memory storing the tensor data is not released.

Note that this API copies **tensorDesc** to [NN_Tensor](capi-neuralnetworkruntime-nn-tensor.md). Therefore, if **tensorDesc** is no longer needed, destroy it by calling [OH_NNTensorDesc_Destroy](capi-neural-network-core-h.md#oh_nntensordesc_destroy).

**deviceID** indicates the selected device. If the value is **0**, the first device in the current device list is used by default.

**tensorDesc** is mandatory. If the pointer is null, an error is returned.

If the [NN_Tensor](capi-neuralnetworkruntime-nn-tensor.md) instance is no longer needed, destroy it by calling [OH_NNTensor_Destroy](capi-neural-network-core-h.md#oh_nntensor_destroy).

**Since**: 11

**Parameters**

| Name| Description|
| -- | -- |
| size_t deviceID | Device ID. If the value is **0**, the first device in the current device list is used by default.|
| [NN_TensorDesc](capi-neuralnetworkruntime-nn-tensordesc.md) *tensorDesc | Pointer to the [NN_TensorDesc](capi-neuralnetworkruntime-nn-tensordesc.md) instance.|
| int fd | **fd** of the shared memory to be used.|
| size_t size | Size of the shared memory to be used.|
| size_t offset | Offset of the shared memory to be used.|

**Returns**

| Type| Description|
| -- | -- |
| [NN_Tensor](capi-neuralnetworkruntime-nn-tensor.md) * | Pointer to the [NN_Tensor](capi-neuralnetworkruntime-nn-tensor.md) instance. If the operation fails, **NULL** is returned.|

### OH_NNTensor_Destroy()

```
OH_NN_ReturnCode OH_NNTensor_Destroy(NN_Tensor **tensor)
```

**Description**

Destroys an [NN_Tensor](capi-neuralnetworkruntime-nn-tensor.md) instance.

If the [NN_Tensor](capi-neuralnetworkruntime-nn-tensor.md) instance is no longer needed, call this API to destroy it. Otherwise, a memory leak occurs.

If **tensor** or **\*tensor** is a null pointer, an error is returned and the object is not destroyed.

**Since**: 11

**Parameters**

| Name| Description|
| -- | -- |
| [NN_Tensor](capi-neuralnetworkruntime-nn-tensor.md) **tensor | Double pointer to the [NN_Tensor](capi-neuralnetworkruntime-nn-tensor.md) instance.|

**Returns**

| Type| Description|
| -- | -- |
| [OH_NN_ReturnCode](capi-neural-network-runtime-type-h.md#oh_nn_returncode) | Execution result of the function. If the operation is successful, **OH_NN_SUCCESS** is returned. If the operation fails, an error is returned. For details about the error codes, see [OH_NN_ReturnCode](capi-neural-network-runtime-type-h.md#oh_nn_returncode).|

### OH_NNTensor_GetTensorDesc()

```
NN_TensorDesc *OH_NNTensor_GetTensorDesc(const NN_Tensor *tensor)
```

**Description**

Obtains an [NN_TensorDesc](capi-neuralnetworkruntime-nn-tensordesc.md) instance of [NN_Tensor](capi-neuralnetworkruntime-nn-tensor.md).

You can use this API to obtain the pointer to the [NN_TensorDesc](capi-neuralnetworkruntime-nn-tensordesc.md) instance of the specified [NN_Tensor](capi-neuralnetworkruntime-nn-tensor.md) instance.

You can obtain tensor attributes of various types from the returned [NN_TensorDesc](capi-neuralnetworkruntime-nn-tensordesc.md) instance, such as the name, data format, data type, and shape.

You should not destroy the returned [NN_TensorDesc](capi-neuralnetworkruntime-nn-tensordesc.md) instance because it points to an internal instance of [NN_Tensor](capi-neuralnetworkruntime-nn-tensor.md). Otherwise, once [OH_NNTensor_Destroy](capi-neural-network-core-h.md#oh_nntensor_destroy) is called, a crash may occur due to double memory release.

If **tensor** is a null pointer, a null pointer is returned.

**Since**: 11

**Parameters**

| Name| Description|
| -- | -- |
| [const NN_Tensor](capi-neuralnetworkruntime-nn-tensor.md) *tensor | Pointer to the [NN_Tensor](capi-neuralnetworkruntime-nn-tensor.md) instance.|

**Returns**

| Type| Description|
| -- | -- |
| [NN_TensorDesc](capi-neuralnetworkruntime-nn-tensordesc.md) * | Pointer to the [NN_TensorDesc](capi-neuralnetworkruntime-nn-tensordesc.md) instance. If the operation fails, **NULL** is returned.|

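A short sketch of reading an attribute through the internal descriptor; note the descriptor is owned by the tensor and must not be destroyed by the caller (the helper name is illustrative):

```c
#include <neural_network_runtime/neural_network_core.h>

// Sketch: query a tensor's data type via its internal descriptor.
static OH_NN_ReturnCode GetTensorDataType(const NN_Tensor *tensor, OH_NN_DataType *dataType)
{
    NN_TensorDesc *desc = OH_NNTensor_GetTensorDesc(tensor);
    if (desc == NULL) {
        return OH_NN_NULL_PTR;
    }
    // Do NOT call OH_NNTensorDesc_Destroy(&desc) here: desc points to
    // memory owned by the tensor and is released with it.
    return OH_NNTensorDesc_GetDataType(desc, dataType);
}
```
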
### OH_NNTensor_GetDataBuffer()

```
void *OH_NNTensor_GetDataBuffer(const NN_Tensor *tensor)
```

**Description**

Obtains the memory address of [NN_Tensor](capi-neuralnetworkruntime-nn-tensor.md) data.

You can read data from or write data to the tensor data memory. The data memory is mapped from the shared memory on the device. Therefore, the device driver can directly obtain tensor data in zero-copy mode.

Only tensor data in the [offset, size) segment in the corresponding shared memory can be used. **offset** indicates the offset in the shared memory and can be obtained by calling [OH_NNTensor_GetOffset](capi-neural-network-core-h.md#oh_nntensor_getoffset). **size** indicates the total size of the shared memory, which can be obtained by calling [OH_NNTensor_GetSize](capi-neural-network-core-h.md#oh_nntensor_getsize).

If **tensor** is a null pointer, a null pointer is returned.

**Since**: 11

**Parameters**

| Name| Description|
| -- | -- |
| [const NN_Tensor](capi-neuralnetworkruntime-nn-tensor.md) *tensor | Pointer to the [NN_Tensor](capi-neuralnetworkruntime-nn-tensor.md) instance.|

**Returns**

| Type| Description|
| -- | -- |
| void * | Pointer to the tensor data memory. If the operation fails, a null pointer is returned.|

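A hedged sketch of filling a tensor's shared memory with host data before inference, assuming a statically shaped tensor (the helper name and checks are illustrative):

```c
#include <neural_network_runtime/neural_network_core.h>
#include <string.h>

// Sketch: copy host data into a tensor's mapped shared memory.
// The buffer is zero-copy visible to the device driver.
static OH_NN_ReturnCode FillTensor(NN_Tensor *tensor, const void *data, size_t dataSize)
{
    void *buffer = OH_NNTensor_GetDataBuffer(tensor);
    if (buffer == NULL) {
        return OH_NN_NULL_PTR;
    }
    // Guard against overrunning the tensor's byte size.
    size_t byteSize = 0;
    NN_TensorDesc *desc = OH_NNTensor_GetTensorDesc(tensor);
    OH_NN_ReturnCode ret = OH_NNTensorDesc_GetByteSize(desc, &byteSize);
    if (ret != OH_NN_SUCCESS || dataSize > byteSize) {
        return OH_NN_INVALID_PARAMETER;
    }
    memcpy(buffer, data, dataSize);
    return OH_NN_SUCCESS;
}
```
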
### OH_NNTensor_GetFd()

```
OH_NN_ReturnCode OH_NNTensor_GetFd(const NN_Tensor *tensor, int *fd)
```

**Description**

Obtains the file descriptor (**fd**) of the shared memory where [NN_Tensor](capi-neuralnetworkruntime-nn-tensor.md) data is stored.

**fd** corresponds to a device shared memory and can be used by another [NN_Tensor](capi-neuralnetworkruntime-nn-tensor.md) through [OH_NNTensor_CreateWithFd](capi-neural-network-core-h.md#oh_nntensor_createwithfd).

If **tensor** or **fd** is a null pointer, an error is returned.

**Since**: 11

**Parameters**

| Name| Description|
| -- | -- |
| [const NN_Tensor](capi-neuralnetworkruntime-nn-tensor.md) *tensor | Pointer to the [NN_Tensor](capi-neuralnetworkruntime-nn-tensor.md) instance.|
| int *fd | **fd** of the shared memory.|

**Returns**

| Type| Description|
| -- | -- |
| [OH_NN_ReturnCode](capi-neural-network-runtime-type-h.md#oh_nn_returncode) | Execution result of the function. If the operation is successful, **OH_NN_SUCCESS** is returned. If the operation fails, an error is returned. For details about the error codes, see [OH_NN_ReturnCode](capi-neural-network-runtime-type-h.md#oh_nn_returncode).|

### OH_NNTensor_GetSize()

```
OH_NN_ReturnCode OH_NNTensor_GetSize(const NN_Tensor *tensor, size_t *size)
```

**Description**

Obtains the size of the shared memory where the [NN_Tensor](capi-neuralnetworkruntime-nn-tensor.md) data is stored.

The value of **size** is the same as the **size** parameter passed to [OH_NNTensor_CreateWithSize](capi-neural-network-core-h.md#oh_nntensor_createwithsize) or [OH_NNTensor_CreateWithFd](capi-neural-network-core-h.md#oh_nntensor_createwithfd). However, for a tensor created by using [OH_NNTensor_Create](capi-neural-network-core-h.md#oh_nntensor_create), the value of **size** is equal to the number of bytes actually occupied by the tensor data, which can be obtained by calling [OH_NNTensorDesc_GetByteSize](capi-neural-network-core-h.md#oh_nntensordesc_getbytesize).

Only tensor data in the [offset, size) segment in the shared memory corresponding to the **fd** can be used. **offset** indicates the offset in the shared memory and can be obtained by calling [OH_NNTensor_GetOffset](capi-neural-network-core-h.md#oh_nntensor_getoffset). **size** indicates the total size of the shared memory.

If **tensor** or **size** is a null pointer, an error is returned.

**Since**: 11

**Parameters**

| Name| Description|
| -- | -- |
| [const NN_Tensor](capi-neuralnetworkruntime-nn-tensor.md) *tensor | Pointer to the [NN_Tensor](capi-neuralnetworkruntime-nn-tensor.md) instance.|
| size_t *size | Size of the shared memory where the returned data is located.|

**Returns**

| Type| Description|
| -- | -- |
| [OH_NN_ReturnCode](capi-neural-network-runtime-type-h.md#oh_nn_returncode) | Execution result of the function. If the operation is successful, **OH_NN_SUCCESS** is returned. If the operation fails, an error is returned. For details about the error codes, see [OH_NN_ReturnCode](capi-neural-network-runtime-type-h.md#oh_nn_returncode).|

### OH_NNTensor_GetOffset()

```
OH_NN_ReturnCode OH_NNTensor_GetOffset(const NN_Tensor *tensor, size_t *offset)
```

**Description**

Obtains the offset of [NN_Tensor](capi-neuralnetworkruntime-nn-tensor.md) data in the shared memory.

**offset** indicates the offset of tensor data in the corresponding shared memory. It can be used by another [NN_Tensor](capi-neuralnetworkruntime-nn-tensor.md) together with **fd** and **size** of the shared memory through [OH_NNTensor_CreateWithFd](capi-neural-network-core-h.md#oh_nntensor_createwithfd).

Only tensor data in the [offset, size) segment in the shared memory corresponding to the **fd** can be used. **offset** indicates the offset in the shared memory and can be obtained by calling [OH_NNTensor_GetOffset](capi-neural-network-core-h.md#oh_nntensor_getoffset). **size** indicates the total size of the shared memory, which can be obtained by calling [OH_NNTensor_GetSize](capi-neural-network-core-h.md#oh_nntensor_getsize).

If **tensor** or **offset** is a null pointer, an error is returned.

**Since**: 11

**Parameters**

| Name| Description|
| -- | -- |
| [const NN_Tensor](capi-neuralnetworkruntime-nn-tensor.md) *tensor | Pointer to the [NN_Tensor](capi-neuralnetworkruntime-nn-tensor.md) instance.|
| size_t *offset | Offset of the tensor data in the shared memory identified by **fd**.|

**Returns**

| Type| Description|
| -- | -- |
| [OH_NN_ReturnCode](capi-neural-network-runtime-type-h.md#oh_nn_returncode) | Execution result of the function. If the operation is successful, **OH_NN_SUCCESS** is returned. If the operation fails, an error is returned. For details about the error codes, see [OH_NN_ReturnCode](capi-neural-network-runtime-type-h.md#oh_nn_returncode).|

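Taken together, **fd**, **size**, and **offset** let a second tensor reuse the first tensor's shared memory. A hedged sketch (the function name is illustrative, and **desc2** is assumed to describe a tensor that fits in the [offset, size) segment):

```c
#include <neural_network_runtime/neural_network_core.h>

// Sketch: create a tensor that shares the memory of an existing one.
static NN_Tensor *ShareTensorMemory(const NN_Tensor *source, NN_TensorDesc *desc2)
{
    int fd = -1;
    size_t size = 0;
    size_t offset = 0;
    if (OH_NNTensor_GetFd(source, &fd) != OH_NN_SUCCESS ||
        OH_NNTensor_GetSize(source, &size) != OH_NN_SUCCESS ||
        OH_NNTensor_GetOffset(source, &offset) != OH_NN_SUCCESS) {
        return NULL;
    }
    // Destroying the returned tensor does not release the shared
    // memory, because the memory is owned by the source tensor.
    return OH_NNTensor_CreateWithFd(0, desc2, fd, size, offset);
}
```
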
### OH_NNExecutor_Construct()

```
OH_NNExecutor *OH_NNExecutor_Construct(OH_NNCompilation *compilation)
```

**Description**

Creates an [OH_NNExecutor](capi-neuralnetworkruntime-oh-nnexecutor.md) executor instance.

This API constructs a model inference executor for a device based on the specified [OH_NNCompilation](capi-neuralnetworkruntime-oh-nncompilation.md) instance. Use [OH_NNExecutor_SetInput](capi-neural-network-runtime-h.md#oh_nnexecutor_setinput) to set the model input data. After the input data is set, call [OH_NNExecutor_Run](capi-neural-network-runtime-h.md#oh_nnexecutor_run) to perform inference and then call [OH_NNExecutor_SetOutput](capi-neural-network-runtime-h.md#oh_nnexecutor_setoutput) to obtain the computing result. After an [OH_NNExecutor](capi-neuralnetworkruntime-oh-nnexecutor.md) instance is created from the [OH_NNCompilation](capi-neuralnetworkruntime-oh-nncompilation.md) instance, destroy the [OH_NNCompilation](capi-neuralnetworkruntime-oh-nncompilation.md) instance if it is no longer needed.

**Since**: 9

**Parameters**

| Name| Description|
| -- | -- |
| [OH_NNCompilation](capi-neuralnetworkruntime-oh-nncompilation.md) *compilation | Pointer to the [OH_NNCompilation](capi-neuralnetworkruntime-oh-nncompilation.md) instance.|

**Returns**

| Type| Description|
| -- | -- |
| [OH_NNExecutor](capi-neuralnetworkruntime-oh-nnexecutor.md) * | Pointer to the [OH_NNExecutor](capi-neuralnetworkruntime-oh-nnexecutor.md) instance. If the operation fails, **NULL** is returned.|

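A minimal sketch of the typical creation flow, assuming **compilation** has already been built elsewhere; **CreateExecutor** is an illustrative helper name, not part of the API:

```c
#include <neural_network_runtime/neural_network_core.h>

// Sketch: build an executor from a finished compilation, then release the
// compilation instance, which is no longer needed once the executor exists.
OH_NNExecutor *CreateExecutor(OH_NNCompilation *compilation)
{
    OH_NNExecutor *executor = OH_NNExecutor_Construct(compilation);
    // The compilation can be destroyed as soon as the executor is created.
    OH_NNCompilation_Destroy(&compilation);
    return executor; // NULL if construction failed
}
```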
### OH_NNExecutor_GetOutputShape()

```
OH_NN_ReturnCode OH_NNExecutor_GetOutputShape(OH_NNExecutor *executor, uint32_t outputIndex, int32_t **shape, uint32_t *shapeLength)
```

**Description**

Obtains the dimension information about the output tensor.

You can use this API to obtain the dimensions of a specified output, and the number of those dimensions, after a single inference is performed by calling [OH_NNExecutor_Run](capi-neural-network-runtime-h.md#oh_nnexecutor_run). It is commonly used in dynamic shape input and output scenarios.

If the value of **outputIndex** reaches or exceeds the number of output tensors, an error is returned. You can obtain the number of output tensors by calling [OH_NNExecutor_GetOutputCount](capi-neural-network-core-h.md#oh_nnexecutor_getoutputcount).

As an output parameter, **shape** cannot be a null pointer. Otherwise, an error is returned. For example, you should define **int32_t\* tensorShape = NULL** and pass **&tensorShape** as the **shape** argument.

You do not need to release the memory of **shape**. It is released with **executor**.

**Since**: 9

**Parameters**

| Name| Description|
| -- | -- |
| [OH_NNExecutor](capi-neuralnetworkruntime-oh-nnexecutor.md) *executor | Pointer to the [OH_NNExecutor](capi-neuralnetworkruntime-oh-nnexecutor.md) instance.|
| uint32_t outputIndex | Output index value, which follows the sequence of the output data set when [OH_NNModel_SpecifyInputsAndOutputs](capi-neural-network-runtime-h.md#oh_nnmodel_specifyinputsandoutputs) is called. For example, if **outputIndices** is {4, 6, 8} when [OH_NNModel_SpecifyInputsAndOutputs](capi-neural-network-runtime-h.md#oh_nnmodel_specifyinputsandoutputs) is called, pass **0**, **1**, or **2** here to query the corresponding output dimensions.|
| int32_t **shape | Pointer to the int32_t array. The value of each element in the array is the length of the output tensor in each dimension.|
| uint32_t *shapeLength | Pointer to the uint32_t type. The number of output dimensions is returned.|

**Returns**

| Type| Description|
| -- | -- |
| [OH_NN_ReturnCode](capi-neural-network-runtime-type-h.md#oh_nn_returncode) | Execution result of the function. If the operation is successful, **OH_NN_SUCCESS** is returned. If the operation fails, an error is returned. For details about the error codes, see [OH_NN_ReturnCode](capi-neural-network-runtime-type-h.md#oh_nn_returncode).|

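A minimal sketch of querying an output shape after a run; **PrintOutputShape** is an illustrative helper, and the shape array is owned by the executor, so it is never freed:

```c
#include <stdio.h>
#include <neural_network_runtime/neural_network_core.h>

// Sketch: print the actual shape of output 0 after inference.
// tensorShape starts as NULL; the executor owns the returned array.
void PrintOutputShape(OH_NNExecutor *executor)
{
    int32_t *tensorShape = NULL;
    uint32_t shapeLength = 0;
    OH_NN_ReturnCode ret = OH_NNExecutor_GetOutputShape(executor, 0, &tensorShape, &shapeLength);
    if (ret != OH_NN_SUCCESS) {
        return;
    }
    for (uint32_t i = 0; i < shapeLength; ++i) {
        printf("dim[%u] = %d\n", i, tensorShape[i]);
    }
    // Do not free tensorShape; it is released together with the executor.
}
```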
### OH_NNExecutor_Destroy()

```
void OH_NNExecutor_Destroy(OH_NNExecutor **executor)
```

**Description**

Destroys an executor instance to release the memory occupied by it.

This API needs to be called to release the executor instance created by calling [OH_NNExecutor_Construct](capi-neural-network-core-h.md#oh_nnexecutor_construct). Otherwise, a memory leak occurs. If **executor** or **\*executor** is a null pointer, this API only prints a warning log and does not execute the release logic.

**Since**: 9

**Parameters**

| Name| Description|
| -- | -- |
| [OH_NNExecutor](capi-neuralnetworkruntime-oh-nnexecutor.md) **executor | Level-2 pointer to the [OH_NNExecutor](capi-neuralnetworkruntime-oh-nnexecutor.md) instance.|

### OH_NNExecutor_GetInputCount()

```
OH_NN_ReturnCode OH_NNExecutor_GetInputCount(const OH_NNExecutor *executor, size_t *inputCount)
```

**Description**

Obtains the number of input tensors.

You can obtain the number of input tensors from **executor**, and then use [OH_NNExecutor_CreateInputTensorDesc](capi-neural-network-core-h.md#oh_nnexecutor_createinputtensordesc) to create a tensor description based on the specified tensor index.

**Since**: 11

**Parameters**

| Name| Description|
| -- | -- |
| [const OH_NNExecutor](capi-neuralnetworkruntime-oh-nnexecutor.md) *executor | Pointer to the [OH_NNExecutor](capi-neuralnetworkruntime-oh-nnexecutor.md) instance.|
| size_t *inputCount | Number of returned input tensors.|

**Returns**

| Type| Description|
| -- | -- |
| [OH_NN_ReturnCode](capi-neural-network-runtime-type-h.md#oh_nn_returncode) | Execution result of the function. If the operation is successful, **OH_NN_SUCCESS** is returned. If the operation fails, an error is returned. For details about the error codes, see [OH_NN_ReturnCode](capi-neural-network-runtime-type-h.md#oh_nn_returncode).|

### OH_NNExecutor_GetOutputCount()

```
OH_NN_ReturnCode OH_NNExecutor_GetOutputCount(const OH_NNExecutor *executor, size_t *outputCount)
```

**Description**

Obtains the number of output tensors.

You can obtain the number of output tensors from **executor**, and then use [OH_NNExecutor_CreateOutputTensorDesc](capi-neural-network-core-h.md#oh_nnexecutor_createoutputtensordesc) to create a tensor description based on the specified tensor index.

**Since**: 11

**Parameters**

| Name| Description|
| -- | -- |
| [const OH_NNExecutor](capi-neuralnetworkruntime-oh-nnexecutor.md) *executor | Pointer to the [OH_NNExecutor](capi-neuralnetworkruntime-oh-nnexecutor.md) instance.|
| size_t *outputCount | Number of returned output tensors.|

**Returns**

| Type| Description|
| -- | -- |
| [OH_NN_ReturnCode](capi-neural-network-runtime-type-h.md#oh_nn_returncode) | Execution result of the function. If the operation is successful, **OH_NN_SUCCESS** is returned. If the operation fails, an error is returned. For details about the error codes, see [OH_NN_ReturnCode](capi-neural-network-runtime-type-h.md#oh_nn_returncode).|

### OH_NNExecutor_CreateInputTensorDesc()

```
NN_TensorDesc *OH_NNExecutor_CreateInputTensorDesc(const OH_NNExecutor *executor, size_t index)
```

**Description**

Creates the description of an input tensor based on the specified index value.

The description contains all types of attribute values of the tensor. If the value of **index** reaches or exceeds the number of input tensors, an error is returned. You can obtain the number of input tensors by calling [OH_NNExecutor_GetInputCount](capi-neural-network-core-h.md#oh_nnexecutor_getinputcount).

**Since**: 11

**Parameters**

| Name| Description|
| -- | -- |
| [const OH_NNExecutor](capi-neuralnetworkruntime-oh-nnexecutor.md) *executor | Pointer to the [OH_NNExecutor](capi-neuralnetworkruntime-oh-nnexecutor.md) instance.|
| size_t index | Index value of the input tensor.|

**Returns**

| Type| Description|
| -- | -- |
| [NN_TensorDesc](capi-neuralnetworkruntime-nn-tensordesc.md) * | Pointer to the [NN_TensorDesc](capi-neuralnetworkruntime-nn-tensordesc.md) instance. If the operation fails, **NULL** is returned.|

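A minimal sketch combining this API with [OH_NNExecutor_GetInputCount](capi-neural-network-core-h.md#oh_nnexecutor_getinputcount); it assumes **OH_NNTensorDesc_Destroy** from the same header for releasing each description, and **DescribeInputs** is an illustrative helper name:

```c
#include <neural_network_runtime/neural_network_core.h>

// Sketch: create (and immediately release) a description for every input
// tensor of the executor.
OH_NN_ReturnCode DescribeInputs(const OH_NNExecutor *executor)
{
    size_t inputCount = 0;
    OH_NN_ReturnCode ret = OH_NNExecutor_GetInputCount(executor, &inputCount);
    if (ret != OH_NN_SUCCESS) {
        return ret;
    }
    for (size_t i = 0; i < inputCount; ++i) {
        NN_TensorDesc *desc = OH_NNExecutor_CreateInputTensorDesc(executor, i);
        if (desc == NULL) {
            return OH_NN_FAILED;
        }
        // ... read attributes here, or pass desc to OH_NNTensor_Create ...
        OH_NNTensorDesc_Destroy(&desc);
    }
    return OH_NN_SUCCESS;
}
```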
### OH_NNExecutor_CreateOutputTensorDesc()

```
NN_TensorDesc *OH_NNExecutor_CreateOutputTensorDesc(const OH_NNExecutor *executor, size_t index)
```

**Description**

Creates the description of an output tensor based on the specified index value.

The description contains all types of attribute values of the tensor. If the value of **index** reaches or exceeds the number of output tensors, an error is returned. You can obtain the number of output tensors by calling [OH_NNExecutor_GetOutputCount](capi-neural-network-core-h.md#oh_nnexecutor_getoutputcount).

**Since**: 11

**Parameters**

| Name| Description|
| -- | -- |
| [const OH_NNExecutor](capi-neuralnetworkruntime-oh-nnexecutor.md) *executor | Pointer to the [OH_NNExecutor](capi-neuralnetworkruntime-oh-nnexecutor.md) instance.|
| size_t index | Index value of the output tensor.|

**Returns**

| Type| Description|
| -- | -- |
| [NN_TensorDesc](capi-neuralnetworkruntime-nn-tensordesc.md) * | Pointer to the [NN_TensorDesc](capi-neuralnetworkruntime-nn-tensordesc.md) instance. If the operation fails, **NULL** is returned.|

### OH_NNExecutor_GetInputDimRange()

```
OH_NN_ReturnCode OH_NNExecutor_GetInputDimRange(const OH_NNExecutor *executor, size_t index, size_t **minInputDims, size_t **maxInputDims, size_t *shapeLength)
```

**Description**

Obtains the dimension range of all input tensors.

If the input tensor has a dynamic shape, the dimension range supported by the tensor may vary from device to device. You can call this API to obtain the dimension range supported by the current device. **\*minInputDims** saves the minimum dimensions of the specified input tensor (the number of dimensions matches the shape), while **\*maxInputDims** saves the maximum dimensions.

For example, if an input tensor has a dynamic shape of [-1, -1, -1, 3], **\*minInputDims** may be [1, 10, 10, 3], and **\*maxInputDims** may be [100, 1024, 1024, 3].

If the value of **index** reaches or exceeds the number of input tensors, an error is returned. You can obtain the number of input tensors by calling [OH_NNExecutor_GetInputCount](capi-neural-network-core-h.md#oh_nnexecutor_getinputcount).

As output parameters, **minInputDims** and **maxInputDims** cannot be null pointers. Otherwise, an error is returned. For example, you should define **size_t\* minInDims = NULL**, and then pass **&minInDims** as the **minInputDims** argument.

You do not need to release the memory of **minInputDims** and **maxInputDims**. It is released with **executor**.

**Since**: 11

**Parameters**

| Name| Description|
| -- | -- |
| [const OH_NNExecutor](capi-neuralnetworkruntime-oh-nnexecutor.md) *executor | Pointer to the [OH_NNExecutor](capi-neuralnetworkruntime-oh-nnexecutor.md) instance.|
| size_t index | Index value of the input tensor.|
| size_t **minInputDims | Pointer to the returned array, which saves the minimum dimensions of the specified input tensor (the number of dimensions matches the shape).|
| size_t **maxInputDims | Pointer to the returned array, which saves the maximum dimensions of the specified input tensor (the number of dimensions matches the shape).|
| size_t *shapeLength | Number of dimensions of the returned input tensor, which is the same as the shape.|

**Returns**

| Type| Description|
| -- | -- |
| [OH_NN_ReturnCode](capi-neural-network-runtime-type-h.md#oh_nn_returncode) | Execution result of the function. If the operation is successful, **OH_NN_SUCCESS** is returned. If the operation fails, an error is returned. For details about the error codes, see [OH_NN_ReturnCode](capi-neural-network-runtime-type-h.md#oh_nn_returncode).|

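A minimal sketch of querying the dimension range of input 0; **PrintInputDimRange** is an illustrative helper, and both arrays are owned by the executor, so they are never freed:

```c
#include <stdio.h>
#include <neural_network_runtime/neural_network_core.h>

// Sketch: print the [min, max] range of each dimension of input 0.
// Both pointers start as NULL; the executor owns the returned arrays.
void PrintInputDimRange(const OH_NNExecutor *executor)
{
    size_t *minInDims = NULL;
    size_t *maxInDims = NULL;
    size_t shapeLength = 0;
    OH_NN_ReturnCode ret = OH_NNExecutor_GetInputDimRange(
        executor, 0, &minInDims, &maxInDims, &shapeLength);
    if (ret != OH_NN_SUCCESS) {
        return;
    }
    for (size_t i = 0; i < shapeLength; ++i) {
        printf("dim %zu: [%zu, %zu]\n", i, minInDims[i], maxInDims[i]);
    }
}
```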
### OH_NNExecutor_SetOnRunDone()

```
OH_NN_ReturnCode OH_NNExecutor_SetOnRunDone(OH_NNExecutor *executor, NN_OnRunDone onRunDone)
```

**Description**

Sets the callback processing function invoked when the asynchronous inference ends.

For details about the callback function, see [NN_OnRunDone](capi-neural-network-runtime-type-h.md#nn_onrundone).

**Since**: 11

**Parameters**

| Name| Description|
| -- | -- |
| [OH_NNExecutor](capi-neuralnetworkruntime-oh-nnexecutor.md) *executor | Pointer to the [OH_NNExecutor](capi-neuralnetworkruntime-oh-nnexecutor.md) instance.|
| [NN_OnRunDone](capi-neural-network-runtime-type-h.md#nn_onrundone) onRunDone | Handle of the callback function [NN_OnRunDone](capi-neural-network-runtime-type-h.md#nn_onrundone).|

**Returns**

| Type| Description|
| -- | -- |
| [OH_NN_ReturnCode](capi-neural-network-runtime-type-h.md#oh_nn_returncode) | Execution result of the function. If the operation is successful, **OH_NN_SUCCESS** is returned. If the operation fails, an error is returned. For details about the error codes, see [OH_NN_ReturnCode](capi-neural-network-runtime-type-h.md#oh_nn_returncode).|

### OH_NNExecutor_SetOnServiceDied()

```
OH_NN_ReturnCode OH_NNExecutor_SetOnServiceDied(OH_NNExecutor *executor, NN_OnServiceDied onServiceDied)
```

**Description**

Sets the callback processing function invoked when the device driver service terminates unexpectedly during asynchronous inference.

For details about the callback function, see [NN_OnServiceDied](capi-neural-network-runtime-type-h.md#nn_onservicedied).

**Since**: 11

**Parameters**

| Name| Description|
| -- | -- |
| [OH_NNExecutor](capi-neuralnetworkruntime-oh-nnexecutor.md) *executor | Pointer to the [OH_NNExecutor](capi-neuralnetworkruntime-oh-nnexecutor.md) instance.|
| [NN_OnServiceDied](capi-neural-network-runtime-type-h.md#nn_onservicedied) onServiceDied | Handle of the callback function [NN_OnServiceDied](capi-neural-network-runtime-type-h.md#nn_onservicedied).|

**Returns**

| Type| Description|
| -- | -- |
| [OH_NN_ReturnCode](capi-neural-network-runtime-type-h.md#oh_nn_returncode) | Execution result of the function. If the operation is successful, **OH_NN_SUCCESS** is returned. If the operation fails, an error is returned. For details about the error codes, see [OH_NN_ReturnCode](capi-neural-network-runtime-type-h.md#oh_nn_returncode).|

### OH_NNExecutor_RunSync()

```
OH_NN_ReturnCode OH_NNExecutor_RunSync(OH_NNExecutor *executor, NN_Tensor *inputTensor[], size_t inputCount, NN_Tensor *outputTensor[], size_t outputCount)
```

**Description**

Performs synchronous inference.

You need to create input and output tensors by calling [OH_NNTensor_Create](capi-neural-network-core-h.md#oh_nntensor_create), [OH_NNTensor_CreateWithSize](capi-neural-network-core-h.md#oh_nntensor_createwithsize), or [OH_NNTensor_CreateWithFd](capi-neural-network-core-h.md#oh_nntensor_createwithfd). Then, use [OH_NNTensor_GetDataBuffer](capi-neural-network-core-h.md#oh_nntensor_getdatabuffer) to obtain the pointer to tensor data and copy the input data to it. The executor performs model inference, generates the inference result, and writes the result to the output tensor.

If the output tensor has a dynamic shape, you can obtain the actual shape of the output tensor by calling [OH_NNExecutor_GetOutputShape](capi-neural-network-core-h.md#oh_nnexecutor_getoutputshape). Alternatively, obtain the tensor description from the input tensor by calling [OH_NNTensor_GetTensorDesc](capi-neural-network-core-h.md#oh_nntensor_gettensordesc), and then obtain the actual shape by calling [OH_NNTensorDesc_GetShape](capi-neural-network-core-h.md#oh_nntensordesc_getshape).

**Since**: 11

**Parameters**

| Name| Description|
| -- | -- |
| [OH_NNExecutor](capi-neuralnetworkruntime-oh-nnexecutor.md) *executor | Pointer to the [OH_NNExecutor](capi-neuralnetworkruntime-oh-nnexecutor.md) instance.|
| [NN_Tensor](capi-neuralnetworkruntime-nn-tensor.md) *inputTensor[] | Array of input tensors.|
| size_t inputCount | Number of input tensors.|
| [NN_Tensor](capi-neuralnetworkruntime-nn-tensor.md) *outputTensor[] | Array of output tensors.|
| size_t outputCount | Number of output tensors.|

**Returns**

| Type| Description|
| -- | -- |
| [OH_NN_ReturnCode](capi-neural-network-runtime-type-h.md#oh_nn_returncode) | Execution result of the function. If the operation is successful, **OH_NN_SUCCESS** is returned. If the operation fails, an error is returned. For details about the error codes, see [OH_NN_ReturnCode](capi-neural-network-runtime-type-h.md#oh_nn_returncode).|

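A minimal sketch of a single synchronous run; it assumes the tensors were created beforehand with [OH_NNTensor_Create](capi-neural-network-core-h.md#oh_nntensor_create) (or a variant), and **RunOnce**, **inputData**, and **inputDataSize** are illustrative names:

```c
#include <string.h>
#include <neural_network_runtime/neural_network_core.h>

// Sketch: copy caller data into the first input tensor, then run inference.
OH_NN_ReturnCode RunOnce(OH_NNExecutor *executor,
                         NN_Tensor *inputs[], size_t inputCount,
                         NN_Tensor *outputs[], size_t outputCount,
                         const void *inputData, size_t inputDataSize)
{
    // Fill the first input tensor through its data buffer.
    void *buffer = OH_NNTensor_GetDataBuffer(inputs[0]);
    if (buffer == NULL) {
        return OH_NN_FAILED;
    }
    memcpy(buffer, inputData, inputDataSize);

    // On success, the results are in the output tensors and can be read
    // back through OH_NNTensor_GetDataBuffer.
    return OH_NNExecutor_RunSync(executor, inputs, inputCount, outputs, outputCount);
}
```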
### OH_NNExecutor_RunAsync()

```
OH_NN_ReturnCode OH_NNExecutor_RunAsync(OH_NNExecutor *executor, NN_Tensor *inputTensor[], size_t inputCount, NN_Tensor *outputTensor[], size_t outputCount, int32_t timeout, void *userData)
```

**Description**

Performs asynchronous inference.

You need to create input and output tensors by calling [OH_NNTensor_Create](capi-neural-network-core-h.md#oh_nntensor_create), [OH_NNTensor_CreateWithSize](capi-neural-network-core-h.md#oh_nntensor_createwithsize), or [OH_NNTensor_CreateWithFd](capi-neural-network-core-h.md#oh_nntensor_createwithfd). Then, use [OH_NNTensor_GetDataBuffer](capi-neural-network-core-h.md#oh_nntensor_getdatabuffer) to obtain the pointer to tensor data and copy the input data to it. The executor performs model inference, generates the inference result, and writes the result to the output tensor.

If the output tensor has a dynamic shape, you can obtain the actual shape of the output tensor by calling [OH_NNExecutor_GetOutputShape](capi-neural-network-core-h.md#oh_nnexecutor_getoutputshape). Alternatively, obtain the tensor description from the input tensor by calling [OH_NNTensor_GetTensorDesc](capi-neural-network-core-h.md#oh_nntensor_gettensordesc), and then obtain the actual shape by calling [OH_NNTensorDesc_GetShape](capi-neural-network-core-h.md#oh_nntensordesc_getshape).

This API works in non-blocking mode and returns the result immediately after being called. You can obtain the inference result and execution return status through the [NN_OnRunDone](capi-neural-network-runtime-type-h.md#nn_onrundone) callback. If the device driver service stops abnormally during execution, you can use the [NN_OnServiceDied](capi-neural-network-runtime-type-h.md#nn_onservicedied) callback for exception processing.

You can set the [NN_OnRunDone](capi-neural-network-runtime-type-h.md#nn_onrundone) and [NN_OnServiceDied](capi-neural-network-runtime-type-h.md#nn_onservicedied) callbacks by calling [OH_NNExecutor_SetOnRunDone](capi-neural-network-core-h.md#oh_nnexecutor_setonrundone) and [OH_NNExecutor_SetOnServiceDied](capi-neural-network-core-h.md#oh_nnexecutor_setonservicedied).

If the inference times out, it is terminated immediately and the error code [OH_NN_TIMEOUT](capi-neural-network-runtime-type-h.md#oh_nn_returncode) is returned through the [NN_OnRunDone](capi-neural-network-runtime-type-h.md#nn_onrundone) callback.

**userData** is the identifier used to distinguish different asynchronous inferences and is returned as the first parameter in the callback. You can use any data that can distinguish different inferences as the identifier.

**Since**: 11

**Parameters**

| Name| Description|
| -- | -- |
| [OH_NNExecutor](capi-neuralnetworkruntime-oh-nnexecutor.md) *executor | Pointer to the [OH_NNExecutor](capi-neuralnetworkruntime-oh-nnexecutor.md) instance.|
| [NN_Tensor](capi-neuralnetworkruntime-nn-tensor.md) *inputTensor[] | Array of input tensors.|
| size_t inputCount | Number of input tensors.|
| [NN_Tensor](capi-neuralnetworkruntime-nn-tensor.md) *outputTensor[] | Array of output tensors.|
| size_t outputCount | Number of output tensors.|
| int32_t timeout | Timeout interval of asynchronous inference, in ms. For example, **1000** indicates that the timeout interval is 1000 ms.|
| void *userData | Identifier of asynchronous inference.|

**Returns**

| Type| Description|
| -- | -- |
| [OH_NN_ReturnCode](capi-neural-network-runtime-type-h.md#oh_nn_returncode) | Execution result of the function. If the operation is successful, **OH_NN_SUCCESS** is returned. If the operation fails, an error is returned. For details about the error codes, see [OH_NN_ReturnCode](capi-neural-network-runtime-type-h.md#oh_nn_returncode).|

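A minimal sketch of the asynchronous flow. The callback prototypes below are assumptions based on how this page describes them; check [NN_OnRunDone](capi-neural-network-runtime-type-h.md#nn_onrundone) and [NN_OnServiceDied](capi-neural-network-runtime-type-h.md#nn_onservicedied) in **neural_network_runtime_type.h** for the authoritative definitions. **RunAsyncOnce** and **inferenceId** are illustrative names:

```c
#include <stdio.h>
#include <neural_network_runtime/neural_network_core.h>

// Assumed callback prototype: userData identifies which inference finished;
// OH_NN_TIMEOUT is reported here if the run exceeded the timeout.
static void OnRunDone(void *userData, OH_NN_ReturnCode errCode,
                      void *outputTensor[], int32_t outputCount)
{
    (void)outputTensor;
    (void)outputCount;
    printf("inference %d finished with code %d\n", *(int *)userData, errCode);
}

// Assumed callback prototype: the device driver service died; recover here.
static void OnServiceDied(void *userData)
{
    (void)userData;
}

OH_NN_ReturnCode RunAsyncOnce(OH_NNExecutor *executor,
                              NN_Tensor *inputs[], size_t inputCount,
                              NN_Tensor *outputs[], size_t outputCount,
                              int *inferenceId)
{
    OH_NNExecutor_SetOnRunDone(executor, OnRunDone);
    OH_NNExecutor_SetOnServiceDied(executor, OnServiceDied);
    // Returns immediately; the result arrives in OnRunDone.
    return OH_NNExecutor_RunAsync(executor, inputs, inputCount,
                                  outputs, outputCount,
                                  1000 /* timeout in ms */, inferenceId);
}
```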
### OH_NNDevice_GetAllDevicesID()

```
OH_NN_ReturnCode OH_NNDevice_GetAllDevicesID(const size_t **allDevicesID, uint32_t *deviceCount)
```

**Description**

Obtains the IDs of the devices connected to NNRt.

Each device has a unique and fixed ID, which is returned through a size_t array.

When device IDs are returned through the size_t array, each element of the array is the ID of a single device. The array memory is managed internally; the data pointer remains valid until this API is called again.

**Since**: 9

**Parameters**

| Name| Description|
| -- | -- |
| const size_t **allDevicesID | Pointer to the **size_t** array. The input **\*allDevicesID** must be a null pointer. Otherwise, the error code [OH_NN_INVALID_PARAMETER](capi-neural-network-runtime-type-h.md#oh_nn_returncode) is returned.|
| uint32_t *deviceCount | Pointer of the uint32_t type, which is used to return the length of **\*allDevicesID**.|

**Returns**

| Type| Description|
| -- | -- |
| [OH_NN_ReturnCode](capi-neural-network-runtime-type-h.md#oh_nn_returncode) | Execution result of the function. If the operation is successful, **OH_NN_SUCCESS** is returned. If the operation fails, an error is returned. For details about the error codes, see [OH_NN_ReturnCode](capi-neural-network-runtime-type-h.md#oh_nn_returncode).|

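A minimal sketch of device enumeration; **ListDevices** is an illustrative helper, and the returned array is managed internally, so it is never freed:

```c
#include <stdio.h>
#include <neural_network_runtime/neural_network_core.h>

// Sketch: print the ID of every device connected to NNRt.
// allDevicesID must start as NULL; the runtime owns the returned array.
void ListDevices(void)
{
    const size_t *allDevicesID = NULL;
    uint32_t deviceCount = 0;
    OH_NN_ReturnCode ret = OH_NNDevice_GetAllDevicesID(&allDevicesID, &deviceCount);
    if (ret != OH_NN_SUCCESS) {
        return;
    }
    for (uint32_t i = 0; i < deviceCount; ++i) {
        printf("device ID: %zu\n", allDevicesID[i]);
    }
}
```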
### OH_NNDevice_GetName()

```
OH_NN_ReturnCode OH_NNDevice_GetName(size_t deviceID, const char **name)
```

**Description**

Obtains the name of the specified device.

**deviceID** specifies the ID of the device whose name is to be obtained. The device ID needs to be obtained by calling [OH_NNDevice_GetAllDevicesID](capi-neural-network-core-h.md#oh_nndevice_getalldevicesid). If the value of **deviceID** is **0**, the first device in the device list is used by default.

**\*name** is a C-style string terminated with **'\0'**. The input **\*name** must be a null pointer. Otherwise, the error code [OH_NN_INVALID_PARAMETER](capi-neural-network-runtime-type-h.md#oh_nn_returncode) is returned. For example, you should define **char\* deviceName = NULL**, and then pass **&deviceName** as the **name** argument.

**Since**: 9

**Parameters**

| Name| Description|
| -- | -- |
| size_t deviceID | Device ID. If the value of **deviceID** is **0**, the first device in the device list is used by default.|
| const char **name | Pointer to the char array, which saves the returned device name.|

**Returns**

| Type| Description|
| -- | -- |
| [OH_NN_ReturnCode](capi-neural-network-runtime-type-h.md#oh_nn_returncode) | Execution result of the function. If the operation is successful, **OH_NN_SUCCESS** is returned. If the operation fails, an error is returned. For details about the error codes, see [OH_NN_ReturnCode](capi-neural-network-runtime-type-h.md#oh_nn_returncode).|

### OH_NNDevice_GetType()

```
OH_NN_ReturnCode OH_NNDevice_GetType(size_t deviceID, OH_NN_DeviceType *deviceType)
```

**Description**

Obtains the type of the specified device.

**deviceID** specifies the device ID used to obtain the device type. If the value of **deviceID** is **0**, the first device in the device list is used by default. Currently, the following device types are supported:

- **OH_NN_CPU**: CPU device.
- **OH_NN_GPU**: GPU device.
- **OH_NN_ACCELERATOR**: machine learning dedicated accelerator.
- **OH_NN_OTHERS**: other device types.

**Since**: 9

**Parameters**

| Name| Description|
| -- | -- |
| size_t deviceID | Device ID. If the value of **deviceID** is **0**, the first device in the device list is used by default.|
| [OH_NN_DeviceType](capi-neural-network-runtime-type-h.md#oh_nn_devicetype) *deviceType | Pointer to the [OH_NN_DeviceType](capi-neural-network-runtime-type-h.md#oh_nn_devicetype) instance. The device type information is returned.|

**Returns**

| Type| Description|
| -- | -- |
| [OH_NN_ReturnCode](capi-neural-network-runtime-type-h.md#oh_nn_returncode) | Execution result of the function. If the operation is successful, **OH_NN_SUCCESS** is returned. If the operation fails, an error is returned. For details about the error codes, see [OH_NN_ReturnCode](capi-neural-network-runtime-type-h.md#oh_nn_returncode).|

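A minimal sketch combining this API with [OH_NNDevice_GetName](capi-neural-network-core-h.md#oh_nndevice_getname) for a device ID obtained from [OH_NNDevice_GetAllDevicesID](capi-neural-network-core-h.md#oh_nndevice_getalldevicesid); **PrintDeviceInfo** is an illustrative helper name:

```c
#include <stdio.h>
#include <neural_network_runtime/neural_network_core.h>

// Sketch: print the name and type of one device. The name string is
// managed internally and must not be freed.
void PrintDeviceInfo(size_t deviceID)
{
    const char *name = NULL;
    OH_NN_DeviceType type;
    if (OH_NNDevice_GetName(deviceID, &name) == OH_NN_SUCCESS &&
        OH_NNDevice_GetType(deviceID, &type) == OH_NN_SUCCESS) {
        // type is one of OH_NN_CPU, OH_NN_GPU, OH_NN_ACCELERATOR, OH_NN_OTHERS.
        printf("device %zu: %s (type %d)\n", deviceID, name, (int)type);
    }
}
```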