# NNRt Development

## Overview

### Function Introduction

Neural Network Runtime (NNRt) functions as a bridge that connects the upper-layer AI inference framework and the bottom-layer acceleration chips, implementing cross-chip inference computing of AI models.

With the open APIs provided by NNRt, chip vendors can connect their dedicated acceleration chips to NNRt to access the OpenHarmony ecosystem.

### Basic Concepts
Before you get started, it would be helpful for you to have a basic understanding of the following concepts:

- Hardware Device Interface (HDI): defines APIs for cross-process communication between services in OpenHarmony.
- Interface Description Language (IDL): the language in which HDIs are defined.

### Constraints
- System version: OpenHarmony trunk version.
- Development environment: Ubuntu 18.04 or later.
- Access device: a chip with AI computing capabilities.

### Working Principles
NNRt connects to device chips through HDIs, which implement cross-process communication between services.

**Figure 1** NNRt architecture

![NNRt architecture](./figures/nnrt_arch_diagram.png)

The NNRt architecture consists of three layers: AI applications at the application layer, the AI inference framework and NNRt at the system layer, and device services at the chip layer. To run model inference on a dedicated acceleration chip, an AI application calls the chip at the bottom layer through the AI inference framework and NNRt. NNRt is responsible for adapting to the various dedicated acceleration chips at the bottom layer, and it exposes standard, unified HDIs through which those chips access the OpenHarmony ecosystem.

During program running, the AI application, AI inference framework, and NNRt reside in the user process, and the underlying device service resides in the service process. NNRt implements the HDI client, and the service side implements the HDI service, to fulfill cross-process communication.

## How to Develop

### Application Scenario
Suppose you are connecting an AI acceleration chip, for example, RK3568, to NNRt. The following describes how to connect the RK3568 chip to NNRt through the HDI to implement AI model inference.
> **NOTE**<br>In this application scenario, the RK3568 chip is connected to NNRt by reusing the CPU operators of MindSpore Lite instead of writing a CPU driver, which is why the following development procedure depends on the dynamic library and header files of MindSpore Lite. When you connect your own chip in practice, no MindSpore Lite library or header file is required.

### Development Flowchart
The following figure shows the process of connecting a dedicated acceleration chip to NNRt.

**Figure 2** NNRt development flowchart

![NNRt development flowchart](./figures/nnrt_dev_flow.png)

### Development Procedure
To connect the acceleration chip to NNRt, perform the development procedure below.

#### Generating the HDI Header File
Download the OpenHarmony source code from the open source community, build the `drivers_interface` component, and generate the HDI header files.

1. [Download the source code](../get-code/sourcecode-acquire.md).

2. Build the IDL file of the HDI.
   ```shell
   ./build.sh --product-name productname --ccache --build-target drivers_interface_nnrt
   ```
   > **NOTE**<br>**productname** indicates the product name. In this example, the product name is **RK3568**.

   When the build is complete, the HDI header files are generated in `out/rk3568/gen/drivers/interface/nnrt`. The default programming language is C++. To generate header files for the C programming language, set the `language` field in the `drivers/interface/nnrt/v1_0/BUILD.gn` file as follows before starting the build:

   ```shell
   language = "c"
   ```

   The directory of the generated header files is as follows:
   ```text
   out/rk3568/gen/drivers/interface/nnrt
   └── v1_0
       ├── drivers_interface_nnrt__libnnrt_proxy_1.0_external_deps_temp.json
       ├── drivers_interface_nnrt__libnnrt_stub_1.0_external_deps_temp.json
       ├── innrt_device.h                 # Header file of the HDI
       ├── iprepared_model.h              # Header file of the AI model
       ├── libnnrt_proxy_1.0__notice.d
       ├── libnnrt_stub_1.0__notice.d
       ├── model_types.cpp                # Implementation file for AI model structure definition
       ├── model_types.h                  # Header file for AI model structure definition
       ├── nnrt_device_driver.cpp         # Device driver implementation example
       ├── nnrt_device_proxy.cpp
       ├── nnrt_device_proxy.h
       ├── nnrt_device_service.cpp        # Implementation file for device services
       ├── nnrt_device_service.h          # Header file for device services
       ├── nnrt_device_stub.cpp
       ├── nnrt_device_stub.h
       ├── nnrt_types.cpp                 # Implementation file for data type definition
       ├── nnrt_types.h                   # Header file for data type definition
       ├── node_attr_types.cpp            # Implementation file for AI model operator attribute definition
       ├── node_attr_types.h              # Header file for AI model operator attribute definition
       ├── prepared_model_proxy.cpp
       ├── prepared_model_proxy.h
       ├── prepared_model_service.cpp     # Implementation file for AI model services
       ├── prepared_model_service.h       # Header file for AI model services
       ├── prepared_model_stub.cpp
       └── prepared_model_stub.h
   ```

#### Implementing the HDI Service

1. Create a development directory in `drivers/peripheral`. The structure of the development directory is as follows:
   ```text
   drivers/peripheral/nnrt
   ├── BUILD.gn                             # Code build script
   ├── bundle.json
   └── hdi_cpu_service                      # Customized directory
       ├── BUILD.gn                         # Code build script
       ├── include
       │   ├── nnrt_device_service.h        # Header file for device services
       │   ├── node_functions.h             # Optional, depending on the actual implementation
       │   ├── node_registry.h              # Optional, depending on the actual implementation
       │   └── prepared_model_service.h     # Header file for AI model services
       └── src
           ├── nnrt_device_driver.cpp       # Implementation file for the device driver
           ├── nnrt_device_service.cpp      # Implementation file for device services
           ├── nnrt_device_stub.cpp         # Optional, depending on the actual implementation
           ├── node_attr_types.cpp          # Optional, depending on the actual implementation
           ├── node_functions.cpp           # Optional, depending on the actual implementation
           ├── node_registry.cpp            # Optional, depending on the actual implementation
           └── prepared_model_service.cpp   # Implementation file for AI model services
   ```

2. Implement the device driver. Unless otherwise required, you can directly use the `nnrt_device_driver.cpp` file generated in step 1. A simplified sketch of its structure is provided at the end of this section.

3. Implement the service APIs. For details, see the `nnrt_device_service.cpp` and `prepared_model_service.cpp` implementation files. For details about the API definition, see [NNRt HDI Definitions](https://gitee.com/openharmony/drivers_interface/tree/master/nnrt). The sketch below illustrates the general shape of such an implementation.
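
   The following minimal sketch (not part of the generated code) shows what the implementation of two device-service APIs might look like. The namespace, method signatures, and return codes are assumptions based on the conventions of the generated C++ skeleton; the header files generated in `out/rk3568/gen/drivers/interface/nnrt` are authoritative.

   ```cpp
   // Illustrative sketch only. The real method set and signatures come from the
   // generated innrt_device.h; adjust names and namespaces to match your build.
   #include <string>
   #include <hdf_base.h>              // HDF_SUCCESS / HDF_FAILURE return codes
   #include "nnrt_device_service.h"

   namespace OHOS {
   namespace HDI {
   namespace Nnrt {
   namespace V1_0 {
   // Reports a vendor-defined device name to the NNRt client.
   int32_t NnrtDeviceService::GetDeviceName(std::string& name)
   {
       name = "RK3568-CPU";           // Hypothetical name used in this example.
       return HDF_SUCCESS;
   }

   // Tells NNRt whether the device can load prepared models from a cache.
   int32_t NnrtDeviceService::IsModelCacheSupported(bool& isSupported)
   {
       isSupported = false;           // The simplified example does not support model caching.
       return HDF_SUCCESS;
   }
   } // namespace V1_0
   } // namespace Nnrt
   } // namespace HDI
   } // namespace OHOS
   ```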

4. Build the implementation files for device drivers and services as shared libraries.

   Create the `BUILD.gn` file with the following content in `drivers/peripheral/nnrt/hdi_cpu_service/`. For details about how to set the related parameters, see [Compilation and Building](https://gitee.com/openharmony/build).

   ```shell
   import("//build/ohos.gni")
   import("//drivers/hdf_core/adapter/uhdf2/uhdf.gni")

   ohos_shared_library("libnnrt_service_1.0") {
     include_dirs = []
     sources = [
       "src/nnrt_device_service.cpp",
       "src/prepared_model_service.cpp",
       "src/node_registry.cpp",
       "src/node_functions.cpp",
       "src/node_attr_types.cpp"
     ]
     public_deps = [ "//drivers/interface/nnrt/v1_0:nnrt_idl_headers" ]
     external_deps = [
       "hdf_core:libhdf_utils",
       "hiviewdfx_hilog_native:libhilog",
       "ipc:ipc_single",
       "c_utils:utils",
     ]

     install_images = [ chipset_base_dir ]
     subsystem_name = "hdf"
     part_name = "drivers_peripheral_nnrt"
   }

   ohos_shared_library("libnnrt_driver") {
     include_dirs = []
     sources = [ "src/nnrt_device_driver.cpp" ]
     deps = [ "//drivers/peripheral/nnrt/hdi_cpu_service:libnnrt_service_1.0" ]

     external_deps = [
       "hdf_core:libhdf_host",
       "hdf_core:libhdf_ipc_adapter",
       "hdf_core:libhdf_utils",
       "hiviewdfx_hilog_native:libhilog",
       "ipc:ipc_single",
       "c_utils:utils",
     ]

     install_images = [ chipset_base_dir ]
     subsystem_name = "hdf"
     part_name = "drivers_peripheral_nnrt"
   }

   group("hdf_nnrt_service") {
     deps = [
       ":libnnrt_driver",
       ":libnnrt_service_1.0",
     ]
   }
   ```

   Add `group("hdf_nnrt_service")` to the `drivers/peripheral/nnrt/BUILD.gn` file so that it can be referenced at a higher directory level.
   ```shell
   if (defined(ohos_lite)) {
     group("nnrt_entry") {
       deps = [ ]
     }
   } else {
     group("nnrt_entry") {
       deps = [
         "./hdi_cpu_service:hdf_nnrt_service",
       ]
     }
   }
   ```

   Create the `drivers/peripheral/nnrt/bundle.json` file to define the new `drivers_peripheral_nnrt` component.
   ```json
   {
     "name": "drivers_peripheral_nnrt",
     "description": "Neural network runtime device driver",
     "version": "3.2",
     "license": "Apache License 2.0",
     "component": {
       "name": "drivers_peripheral_nnrt",
       "subsystem": "hdf",
       "syscap": [""],
       "adapter_system_type": ["standard"],
       "rom": "1024KB",
       "ram": "2048KB",
       "deps": {
         "components": [
           "ipc",
           "hdf_core",
           "hiviewdfx_hilog_native",
           "c_utils"
         ],
         "third_part": [
           "bounds_checking_function"
         ]
       },
       "build": {
         "sub_component": [
           "//drivers/peripheral/nnrt:nnrt_entry"
         ],
         "test": [
         ],
         "inner_kits": [
         ]
       }
     }
   }
   ```
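
For reference, the device driver mentioned in step 2 follows the standard HDF user-mode driver pattern: it registers `Bind`, `Init`, and `Release` hooks through an `HdfDriverEntry`. The heavily simplified sketch below only illustrates that pattern; the function names, the module name, and the empty hook bodies are placeholders, and the `nnrt_device_driver.cpp` file generated in the previous section is authoritative.

```cpp
// Simplified, illustrative sketch of the structure of nnrt_device_driver.cpp.
#include <hdf_base.h>
#include <hdf_device_desc.h>

static int HdfNnrtDeviceDriverBind(struct HdfDeviceObject *deviceObject)
{
    // The generated code creates the device-service stub here and attaches its
    // I/O service to deviceObject->service so that HDI calls reach the service.
    (void)deviceObject;
    return HDF_SUCCESS;
}

static int HdfNnrtDeviceDriverInit(struct HdfDeviceObject *deviceObject)
{
    // One-time driver initialization, if any is required.
    (void)deviceObject;
    return HDF_SUCCESS;
}

static void HdfNnrtDeviceDriverRelease(struct HdfDeviceObject *deviceObject)
{
    // Frees the resources created in Bind/Init.
    (void)deviceObject;
}

static struct HdfDriverEntry g_nnrtDeviceDriverEntry = {
    .moduleVersion = 1,
    .moduleName = "nnrt",    // Placeholder; keep the value from the generated file.
    .Bind = HdfNnrtDeviceDriverBind,
    .Init = HdfNnrtDeviceDriverInit,
    .Release = HdfNnrtDeviceDriverRelease,
};

// Registers the driver entry with the HDF framework.
HDF_INIT(g_nnrtDeviceDriverEntry);
```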

#### Declaring the HDI Service

In the uhdf directory, declare the user-mode driver and service in the `.hcs` file of the corresponding product. For example, for the RK3568 chip, add the following configuration to the `vendor/hihope/rk3568/hdf_config/uhdf/device_info.hcs` file:
   ```text
   nnrt :: host {
       hostName = "nnrt_host";
       priority = 50;
       uid = "";
       gid = "";
       caps = ["DAC_OVERRIDE", "DAC_READ_SEARCH"];
       nnrt_device :: device {
           device0 :: deviceNode {
               policy = 2;
               priority = 100;
               moduleName = "libnnrt_driver.z.so";
               serviceName = "nnrt_device_service";
           }
       }
   }
   ```
> **NOTE**<br>After modifying the `.hcs` file, you need to delete the `out` directory and perform the build again for the modification to take effect.

#### Configuring the User ID and Group ID of the Host Process
When creating the **nnrt_host** process, you need to configure the user ID and group ID for it. The user ID is configured in the `base/startup/init/services/etc/passwd` file, and the group ID is configured in the `base/startup/init/services/etc/group` file.
   ```text
   # Add the user ID in base/startup/init/services/etc/passwd.
   nnrt_host:x:3311:3311:::/bin/false

   # Add the group ID in base/startup/init/services/etc/group.
   nnrt_host:x:3311:
   ```

#### Configuring SELinux

SELinux is enabled for OpenHarmony. You need to configure SELinux rules for the new process and service so that the host process can run, access the required resources, and publish the HDI service.

1. Configure the security context of the **nnrt_host** process in the `base/security/selinux/sepolicy/ohos_policy/drivers/adapter/vendor/type.te` file.
   ```text
   # Add the security context configuration.
   type nnrt_host, hdfdomain, domain;
   ```
   > In the preceding configuration, **nnrt_host** indicates the process name configured previously.

2. Configure access permissions. SELinux uses a trustlist-based permission mechanism: when the service starts, run the `dmesg` command to view the AVC alarms, which list the missing permissions. For details about the SELinux configuration, see [security_selinux](https://gitee.com/openharmony/security_selinux/blob/master/README-en.md).
   ```shell
   hdc_std shell
   dmesg | grep nnrt
   ```

3. Create the `nnrt_host.te` file.
   ```shell
   # Create the nnrt folder.
   mkdir base/security/selinux/sepolicy/ohos_policy/drivers/peripheral/nnrt

   # Create the vendor folder.
   mkdir base/security/selinux/sepolicy/ohos_policy/drivers/peripheral/nnrt/vendor

   # Create the nnrt_host.te file.
   touch base/security/selinux/sepolicy/ohos_policy/drivers/peripheral/nnrt/vendor/nnrt_host.te
   ```

4. Add the required permissions to the `nnrt_host.te` file. For example:
   ```text
   allow nnrt_host dev_hdf_kevent:chr_file { ioctl };
   allow nnrt_host hilog_param:file { read };
   allow nnrt_host sh:binder { transfer };
   allow nnrt_host dev_ashmem_file:chr_file { open };
   allow sh nnrt_host:fd { use };
   ```

#### Configuring the Component Build Entry
Open the `chipset_common.json` file.
```shell
vim //productdefine/common/inherit/chipset_common.json
```
Add the following entry under `"subsystems"` > `"subsystem": "hdf"` > `"components"` so that the new component is built:
```json
{
  "component": "drivers_peripheral_nnrt",
  "features": []
}
```

#### Deleting the out Directory and Building the Entire System
```shell
# Delete the out directory.
rm -rf ./out

# Build the entire system.
./build.sh --product-name rk3568 --ccache --jobs=4
```

### Commissioning and Verification
On completion of service development, you can use XTS to verify its basic functions and compatibility.

1. Build the **hats** test cases of NNRt in the `test/xts/hats/hdf/nnrt` directory.
   ```shell
   # Go to the hats directory.
   cd test/xts/hats

   # Build the hats test cases.
   ./build.sh suite=hats system_size=standard --product-name rk3568

   # Return to the root directory.
   cd -
   ```
   The hats test cases are exported to `out/rk3568/suites/hats/testcases/HatsHdfNnrtFunctionTest`, relative to the code root directory.

2. Push the test cases to the device.
   ```shell
   # Push the executable file of the test cases to the device. In this example, the executable file is HatsHdfNnrtFunctionTest.
   hdc_std file send out/rk3568/suites/hats/testcases/HatsHdfNnrtFunctionTest /data/local/tmp/

   # Grant the required permissions to the executable file of the test cases.
   hdc_std shell "chmod +x /data/local/tmp/HatsHdfNnrtFunctionTest"
   ```

3. Execute the test cases and view the result.
   ```shell
   # Execute the test cases.
   hdc_std shell "/data/local/tmp/HatsHdfNnrtFunctionTest"
   ```

   The test report below shows that all 47 test cases pass, indicating that the service has passed the compatibility test.
   ```text
   ...
   [----------] Global test environment tear-down
   Gtest xml output finished
   [==========] 47 tests from 3 test suites ran. (515 ms total)
   [  PASSED  ] 47 tests.
   ```

### Development Example
For the complete demo code, see [NNRt Service Implementation Example](https://gitee.com/openharmony/ai_neural_network_runtime/tree/master/example/drivers).

1. Copy the `example/driver/nnrt` directory to `drivers/peripheral`.
   ```shell
   cp -r example/driver/nnrt drivers/peripheral
   ```

2. Add the `bundle.json` file to `drivers/peripheral/nnrt`. For details about the `bundle.json` file, see [Development Procedure](#development-procedure).

3. Add the dependency files of MindSpore Lite because the demo depends on the CPU operators of MindSpore Lite.
   - Download the header files of [MindSpore Lite 1.5.0](https://ms-release.obs.cn-north-4.myhuaweicloud.com/1.5.0/MindSpore/lite/release/linux/mindspore-lite-1.5.0-linux-x64.tar.gz).
   - Create the `mindspore` directory in `drivers/peripheral/nnrt`.
     ```shell
     mkdir drivers/peripheral/nnrt/mindspore
     ```
   - Decompress the `mindspore-lite-1.5.0-linux-x64.tar.gz` file, and copy the `runtime/include` directory to `drivers/peripheral/nnrt/mindspore`.
   - Create the directory for the `schema` files of MindSpore Lite and copy them in.
     ```shell
     # Create the mindspore_schema directory.
     mkdir drivers/peripheral/nnrt/hdi_cpu_service/include/mindspore_schema

     # Copy the schema files of MindSpore Lite.
     cp third_party/mindspore/mindspore/lite/schema/* drivers/peripheral/nnrt/hdi_cpu_service/include/mindspore_schema/
     ```
   - Build the dynamic library of MindSpore Lite, and put the dynamic library in the `mindspore` directory.
     ```shell
     # Build the dynamic library of MindSpore Lite.
     ./build.sh --product-name rk3568 --ccache --jobs 4 --build-target mindspore_lib

     # Create the mindspore subdirectory.
     mkdir drivers/peripheral/nnrt/mindspore/mindspore

     # Copy the dynamic library to drivers/peripheral/nnrt/mindspore/mindspore.
     cp out/rk3568/package/phone/system/lib/libmindspore-lite.huawei.so drivers/peripheral/nnrt/mindspore/mindspore/
     ```

4. Follow the [development procedure](#development-procedure) to complete other configurations.
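
After the remaining configurations are complete and the image is flashed, a small user-space program can be used to check that the service is reachable through the generated client proxy, before running the full XTS suite. The sketch below is a hypothetical smoke test: the header path, namespace, `Get()` helper, and `GetDeviceName` call follow the usual conventions of generated HDI C++ client code and are assumptions; check them against the headers generated in `out/rk3568/gen/drivers/interface/nnrt`.

```cpp
// Hypothetical smoke test: verifies that the NNRt HDI service can be obtained
// through the generated client proxy. Not part of the official demo code.
#include <iostream>
#include <string>
#include "v1_0/innrt_device.h"   // Assumed include path for the generated HDI header.

int main()
{
    // Obtain the client proxy of the HDI service published by nnrt_host.
    OHOS::sptr<OHOS::HDI::Nnrt::V1_0::INnrtDevice> device =
        OHOS::HDI::Nnrt::V1_0::INnrtDevice::Get();
    if (device == nullptr) {
        std::cout << "nnrt_device_service is not available" << std::endl;
        return -1;
    }

    std::string name;
    device->GetDeviceName(name);  // Assumed API; see the generated innrt_device.h.
    std::cout << "NNRt device: " << name << std::endl;
    return 0;
}
```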