# Using MindSpore Lite for Image Classification (C/C++)

## When to Use

You can use [MindSpore Lite](../../reference/apis-mindspore-lite-kit/capi-mindspore.md) to quickly deploy AI algorithms into your application and perform on-device model inference for image classification.

Image classification recognizes the objects in an image and is widely used in medical image analysis, autonomous driving, e-commerce, and facial recognition.

## Basic Concepts

- N-API: a set of native APIs for building native modules that ArkTS can call. In this guide, N-API is used to encapsulate the C/C++ inference library into an ArkTS module.

## Development Process

1. Select an image classification model.
2. Use MindSpore Lite to run the model on the device and classify the selected image.

## Environment Setup

Install DevEco Studio 4.1 or later, and update the SDK to API version 11 or later.

## How to Develop

The following uses inference on an image in the album as an example to describe how to use MindSpore Lite to implement image classification.

### Selecting a Model

This sample application uses [mobilenetv2.ms](https://download.mindspore.cn/model_zoo/official/lite/mobilenetv2_openimage_lite/1.5/mobilenetv2.ms) as the image classification model. The model file is available in the **entry/src/main/resources/rawfile** directory of the project.

If you have another pre-trained image classification model, convert it into the .ms format by referring to [Using MindSpore Lite for Model Conversion](mindspore-lite-converter-guidelines.md).

### Writing Code

#### Image Input and Preprocessing

1. Call [@ohos.file.picker](../../reference/apis-core-file-kit/js-apis-file-picker.md) to select the desired image from the album.

2. Based on the model input size, call [@ohos.multimedia.image](../../reference/apis-image-kit/arkts-apis-image.md) and [@ohos.file.fs](../../reference/apis-core-file-kit/js-apis-file-fs.md) to crop the image, obtain the image buffer, and standardize the image.

   ```ts
   // Index.ets
   import { fileIo } from '@kit.CoreFileKit';
   import { photoAccessHelper } from '@kit.MediaLibraryKit';
   import { BusinessError } from '@kit.BasicServicesKit';
   import { image } from '@kit.ImageKit';

   @Entry
   @Component
   struct Index {
     @State modelName: string = 'mobilenetv2.ms';
     @State modelInputHeight: number = 224;
     @State modelInputWidth: number = 224;
     @State uris: Array<string> = [];

     build() {
       Row() {
         Column() {
           Button() {
             Text('photo')
               .fontSize(30)
               .fontWeight(FontWeight.Bold)
           }
           .type(ButtonType.Capsule)
           .margin({
             top: 20
           })
           .backgroundColor('#0D9FFB')
           .width('40%')
           .height('5%')
           .onClick(() => {
             let resMgr = this.getUIContext()?.getHostContext()?.getApplicationContext().resourceManager;

             // Obtain images in an album.
             // 1. Create an image picker instance.
             let photoSelectOptions = new photoAccessHelper.PhotoSelectOptions();

             // 2. Set the media file type to IMAGE and set the maximum number of media files that can be selected.
             photoSelectOptions.MIMEType = photoAccessHelper.PhotoViewMIMETypes.IMAGE_TYPE;
             photoSelectOptions.maxSelectNumber = 1;

             // 3. Create an album picker instance and call select() to open the album page for file selection. After file selection is done, the result set is returned through photoSelectResult.
             let photoPicker = new photoAccessHelper.PhotoViewPicker();
             photoPicker.select(photoSelectOptions,
               async (err: BusinessError, photoSelectResult: photoAccessHelper.PhotoSelectResult) => {
                 if (err) {
                   console.error('MS_LITE_ERR: PhotoViewPicker.select failed with err: ' + JSON.stringify(err));
                   return;
                 }
                 console.info('MS_LITE_LOG: PhotoViewPicker.select successfully, photoSelectResult uri: ' +
                 JSON.stringify(photoSelectResult));
                 this.uris = photoSelectResult.photoUris;
                 console.info('MS_LITE_LOG: uri: ' + this.uris);
                 // Preprocess the image data.
                 try {
                   // 1. Based on the specified URI, call fileIo.openSync to open the file to obtain the FD.
                   let file = fileIo.openSync(this.uris[0], fileIo.OpenMode.READ_ONLY);
                   console.info('MS_LITE_LOG: file fd: ' + file.fd);

                   // 2. Based on the FD, call fileIo.readSync to read the data in the file.
                   let inputBuffer = new ArrayBuffer(4096000);
                   let readLen = fileIo.readSync(file.fd, inputBuffer);
                   console.info('MS_LITE_LOG: readSync data to file succeed and inputBuffer size is:' + readLen);

                   // 3. Perform image preprocessing through PixelMap.
                   let imageSource = image.createImageSource(file.fd);
                   imageSource.createPixelMap().then((pixelMap) => {
                     pixelMap.getImageInfo().then((info) => {
                       console.info('MS_LITE_LOG: info.width = ' + info.size.width);
                       console.info('MS_LITE_LOG: info.height = ' + info.size.height);
                       // 4. Crop the image based on the input image size and obtain the image buffer readBuffer.
                       pixelMap.scale(256.0 / info.size.width, 256.0 / info.size.height).then(() => {
                         pixelMap.crop({
                           x: 16,
                           y: 16,
                           size: { height: this.modelInputHeight, width: this.modelInputWidth }
                         })
                           .then(async () => {
                             let info = await pixelMap.getImageInfo();
                             console.info('MS_LITE_LOG: crop info.width = ' + info.size.width);
                             console.info('MS_LITE_LOG: crop info.height = ' + info.size.height);
                             // Set the size of readBuffer.
                             let readBuffer = new ArrayBuffer(this.modelInputHeight * this.modelInputWidth * 4);
                             await pixelMap.readPixelsToBuffer(readBuffer);
                             console.info('MS_LITE_LOG: Succeeded in reading image pixel data, buffer: ' +
                             readBuffer.byteLength);
                             // Convert readBuffer to the float32 format, and standardize the image.
                             const imageArr =
                               new Uint8Array(readBuffer.slice(0, this.modelInputHeight * this.modelInputWidth * 4));
                             console.info('MS_LITE_LOG: imageArr length: ' + imageArr.length);
                             let means = [0.485, 0.456, 0.406];
                             let stds = [0.229, 0.224, 0.225];
                             let float32View = new Float32Array(this.modelInputHeight * this.modelInputWidth * 3);
                             let index = 0;
                             for (let i = 0; i < imageArr.length; i++) {
                               if ((i + 1) % 4 == 0) {
                                 float32View[index] = (imageArr[i - 3] / 255.0 - means[0]) / stds[0]; // B
                                 float32View[index + 1] = (imageArr[i - 2] / 255.0 - means[1]) / stds[1]; // G
                                 float32View[index + 2] = (imageArr[i - 1] / 255.0 - means[2]) / stds[2]; // R
                                 index += 3;
                               }
                             }
                             console.info('MS_LITE_LOG: float32View length: ' + float32View.length);
                             let printStr = 'float32View data:';
                             for (let i = 0; i < 20; i++) {
                               printStr += ' ' + float32View[i];
                             }
                             console.info('MS_LITE_LOG: float32View data: ' + printStr);
                           })
                       })
                     })
                   })
                 } catch (err) {
                   console.error('MS_LITE_LOG: uri: open file fd failed.' + err);
                 }
               })
           })
         }.width('100%')
       }
       .height('100%')
     }
   }
   ```
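
   The loop above applies the standard ImageNet per-channel normalization: each 8-bit channel value is scaled to the range [0, 1] and then normalized with the channel mean and standard deviation,

   $$x'_c = \frac{x_c / 255 - \mu_c}{\sigma_c}, \qquad \mu = (0.485,\ 0.456,\ 0.406), \qquad \sigma = (0.229,\ 0.224,\ 0.225)$$

   The fourth (alpha) byte of each pixel is skipped, so **float32View** ends up holding 224 x 224 x 3 values, which is exactly the buffer passed to the native inference code later.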

#### Writing Inference Code

Call the [MindSpore Lite](../../reference/apis-mindspore-lite-kit/capi-mindspore.md) native APIs to implement inference on the device. The procedure is as follows:

1. Include the corresponding header files.

   ```c++
   #include <iostream>
   #include <sstream>
   #include <stdlib.h>
   #include <hilog/log.h>
   #include <rawfile/raw_file_manager.h>
   #include <mindspore/types.h>
   #include <mindspore/model.h>
   #include <mindspore/context.h>
   #include <mindspore/status.h>
   #include <mindspore/tensor.h>
   #include "napi/native_api.h"
   ```

2. Read the model file.

   ```c++
   #define LOGI(...) ((void)OH_LOG_Print(LOG_APP, LOG_INFO, LOG_DOMAIN, "[MSLiteNapi]", __VA_ARGS__))
   #define LOGD(...) ((void)OH_LOG_Print(LOG_APP, LOG_DEBUG, LOG_DOMAIN, "[MSLiteNapi]", __VA_ARGS__))
   #define LOGW(...) ((void)OH_LOG_Print(LOG_APP, LOG_WARN, LOG_DOMAIN, "[MSLiteNapi]", __VA_ARGS__))
   #define LOGE(...) ((void)OH_LOG_Print(LOG_APP, LOG_ERROR, LOG_DOMAIN, "[MSLiteNapi]", __VA_ARGS__))

   void *ReadModelFile(NativeResourceManager *nativeResourceManager, const std::string &modelName, size_t *modelSize) {
       auto rawFile = OH_ResourceManager_OpenRawFile(nativeResourceManager, modelName.c_str());
       if (rawFile == nullptr) {
           LOGE("MS_LITE_ERR: Open model file failed");
           return nullptr;
       }
       long fileSize = OH_ResourceManager_GetRawFileSize(rawFile);
       void *modelBuffer = malloc(fileSize);
       if (modelBuffer == nullptr) {
           // Allocation failed; release the raw file handle and report the error.
           LOGE("MS_LITE_ERR: malloc model buffer failed");
           OH_ResourceManager_CloseRawFile(rawFile);
           return nullptr;
       }
       int ret = OH_ResourceManager_ReadRawFile(rawFile, modelBuffer, fileSize);
       if (ret == 0) {
           LOGE("MS_LITE_ERR: OH_ResourceManager_ReadRawFile failed");
           free(modelBuffer);
           OH_ResourceManager_CloseRawFile(rawFile);
           return nullptr;
       }
       OH_ResourceManager_CloseRawFile(rawFile);
       *modelSize = fileSize;
       return modelBuffer;
   }
   ```

3. Create a context, set parameters such as the number of threads and device type, and load the model. The sample model does not support NNRt inference.

   ```c++
   void DestroyModelBuffer(void **buffer) {
       if (buffer == nullptr) {
           return;
       }
       free(*buffer);
       *buffer = nullptr;
   }

   OH_AI_ContextHandle CreateMSLiteContext(void *modelBuffer) {
       // Set executing context for model.
       auto context = OH_AI_ContextCreate();
       if (context == nullptr) {
           DestroyModelBuffer(&modelBuffer);
           LOGE("MS_LITE_ERR: Create MSLite context failed.\n");
           return nullptr;
       }
       // The OH_AI_DeviceInfoCreate(OH_AI_DEVICETYPE_NNRT) option is not supported.
       auto cpu_device_info = OH_AI_DeviceInfoCreate(OH_AI_DEVICETYPE_CPU);

       OH_AI_DeviceInfoSetEnableFP16(cpu_device_info, true);
       OH_AI_ContextAddDeviceInfo(context, cpu_device_info);

       LOGI("MS_LITE_LOG: Build MSLite context success.\n");
       return context;
   }

   OH_AI_ModelHandle CreateMSLiteModel(void *modelBuffer, size_t modelSize, OH_AI_ContextHandle context) {
       // Create model
       auto model = OH_AI_ModelCreate();
       if (model == nullptr) {
           DestroyModelBuffer(&modelBuffer);
           LOGE("MS_LITE_ERR: Allocate MSLite Model failed.\n");
           return nullptr;
       }

       // Build model object
       auto build_ret = OH_AI_ModelBuild(model, modelBuffer, modelSize, OH_AI_MODELTYPE_MINDIR, context);
       DestroyModelBuffer(&modelBuffer);
       if (build_ret != OH_AI_STATUS_SUCCESS) {
           OH_AI_ModelDestroy(&model);
           LOGE("MS_LITE_ERR: Build MSLite model failed.\n");
           return nullptr;
       }
       LOGI("MS_LITE_LOG: Build MSLite model success.\n");
       return model;
   }
   ```
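
   The step description mentions setting the number of threads; the sample leaves the thread settings at their defaults. If you want to tune them, the context exposes thread options that can be set before the device info is added. A minimal sketch (the values below are examples, not requirements of the sample):

   ```c++
   // Optional tuning inside CreateMSLiteContext(), before OH_AI_ContextAddDeviceInfo():
   OH_AI_ContextSetThreadNum(context, 2);            // run inference on 2 threads (example value)
   OH_AI_ContextSetThreadAffinityMode(context, 0);   // 0: do not bind threads to specific cores
   ```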

4. Set the model input data and perform model inference.

   ```c++
   constexpr int K_NUM_PRINT_OF_OUT_DATA = 20;

   // Set the model input data.
   int FillInputTensor(OH_AI_TensorHandle input, std::vector<float> input_data) {
       if (OH_AI_TensorGetDataType(input) == OH_AI_DATATYPE_NUMBERTYPE_FLOAT32) {
           float *data = (float *)OH_AI_TensorGetMutableData(input);
           for (size_t i = 0; i < OH_AI_TensorGetElementNum(input); i++) {
               data[i] = input_data[i];
           }
           return OH_AI_STATUS_SUCCESS;
       } else {
           return OH_AI_STATUS_LITE_ERROR;
       }
   }

   // Execute model inference.
   int RunMSLiteModel(OH_AI_ModelHandle model, std::vector<float> input_data) {
       // Set input data for model.
       auto inputs = OH_AI_ModelGetInputs(model);

       auto ret = FillInputTensor(inputs.handle_list[0], input_data);
       if (ret != OH_AI_STATUS_SUCCESS) {
           LOGE("MS_LITE_ERR: RunMSLiteModel set input error.\n");
           return OH_AI_STATUS_LITE_ERROR;
       }
       // Get model output.
       auto outputs = OH_AI_ModelGetOutputs(model);
       // Predict model.
       auto predict_ret = OH_AI_ModelPredict(model, inputs, &outputs, nullptr, nullptr);
       if (predict_ret != OH_AI_STATUS_SUCCESS) {
           LOGE("MS_LITE_ERR: MSLite Predict error.\n");
           return OH_AI_STATUS_LITE_ERROR;
       }
       LOGI("MS_LITE_LOG: Run MSLite model Predict success.\n");
       // Print output tensor data.
       LOGI("MS_LITE_LOG: Get model outputs:\n");
       for (size_t i = 0; i < outputs.handle_num; i++) {
           auto tensor = outputs.handle_list[i];
           LOGI("MS_LITE_LOG: - Tensor %{public}d name is: %{public}s.\n", static_cast<int>(i),
                OH_AI_TensorGetName(tensor));
           LOGI("MS_LITE_LOG: - Tensor %{public}d size is: %{public}d.\n", static_cast<int>(i),
                (int)OH_AI_TensorGetDataSize(tensor));
           LOGI("MS_LITE_LOG: - Tensor data is:\n");
           auto out_data = reinterpret_cast<const float *>(OH_AI_TensorGetData(tensor));
           std::stringstream outStr;
           for (int j = 0; (j < OH_AI_TensorGetElementNum(tensor)) && (j <= K_NUM_PRINT_OF_OUT_DATA); j++) {
               outStr << out_data[j] << " ";
           }
           LOGI("MS_LITE_LOG: %{public}s", outStr.str().c_str());
       }
       return OH_AI_STATUS_SUCCESS;
   }
   ```
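
   Because **FillInputTensor()** copies **input_data** element by element, it can be worth confirming that the preprocessed buffer matches what the model expects before filling it. The check below is not part of the sample; it is a small sketch using the tensor query APIs and assumes it is placed in **RunMSLiteModel()** right after **OH_AI_ModelGetInputs()** is called:

   ```c++
   // Optional sanity check: compare the input tensor's element count with the preprocessed buffer.
   auto input0 = inputs.handle_list[0];
   size_t shape_num = 0;
   const int64_t *shape = OH_AI_TensorGetShape(input0, &shape_num);
   for (size_t k = 0; k < shape_num; k++) {   // for the sample model this is expected to be 1, 224, 224, 3
       LOGI("MS_LITE_LOG: input dim[%{public}d] = %{public}d", static_cast<int>(k), static_cast<int>(shape[k]));
   }
   if (OH_AI_TensorGetElementNum(input0) != static_cast<int64_t>(input_data.size())) {
       LOGE("MS_LITE_ERR: input element count does not match the preprocessed buffer.\n");
   }
   ```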

5. Implement a complete model inference process.

   ```c++
   static napi_value RunDemo(napi_env env, napi_callback_info info) {
       LOGI("MS_LITE_LOG: Enter runDemo()");
       napi_value error_ret;
       napi_create_int32(env, -1, &error_ret);
       // Process the input data.
       size_t argc = 2;
       napi_value argv[2] = {nullptr};
       napi_get_cb_info(env, info, &argc, argv, nullptr, nullptr);
       bool isArray = false;
       napi_is_array(env, argv[0], &isArray);
       uint32_t length = 0;
       // Obtain the length of the array.
       napi_get_array_length(env, argv[0], &length);
       LOGI("MS_LITE_LOG: argv array length = %{public}d", length);
       std::vector<float> input_data;
       double param = 0;
       for (uint32_t i = 0; i < length; i++) {
           napi_value value;
           napi_get_element(env, argv[0], i, &value);
           napi_get_value_double(env, value, &param);
           input_data.push_back(static_cast<float>(param));
       }
       std::stringstream outstr;
       for (int i = 0; i < K_NUM_PRINT_OF_OUT_DATA; i++) {
           outstr << input_data[i] << " ";
       }
       LOGI("MS_LITE_LOG: input_data = %{public}s", outstr.str().c_str());
       // Read model file
       const std::string modelName = "mobilenetv2.ms";
       LOGI("MS_LITE_LOG: Run model: %{public}s", modelName.c_str());
       size_t modelSize;
       auto resourcesManager = OH_ResourceManager_InitNativeResourceManager(env, argv[1]);
       auto modelBuffer = ReadModelFile(resourcesManager, modelName, &modelSize);
       if (modelBuffer == nullptr) {
           LOGE("MS_LITE_ERR: Read model failed");
           return error_ret;
       }
       LOGI("MS_LITE_LOG: Read model file success");

       auto context = CreateMSLiteContext(modelBuffer);
       if (context == nullptr) {
           LOGE("MS_LITE_ERR: MSLiteFwk Build context failed.\n");
           return error_ret;
       }
       auto model = CreateMSLiteModel(modelBuffer, modelSize, context);
       if (model == nullptr) {
           OH_AI_ContextDestroy(&context);
           LOGE("MS_LITE_ERR: MSLiteFwk Build model failed.\n");
           return error_ret;
       }
       int ret = RunMSLiteModel(model, input_data);
       if (ret != OH_AI_STATUS_SUCCESS) {
           OH_AI_ModelDestroy(&model);
           OH_AI_ContextDestroy(&context);
           LOGE("MS_LITE_ERR: RunMSLiteModel failed.\n");
           return error_ret;
       }
       napi_value out_data;
       napi_create_array(env, &out_data);
       auto outputs = OH_AI_ModelGetOutputs(model);
       OH_AI_TensorHandle output_0 = outputs.handle_list[0];
       float *output0Data = reinterpret_cast<float *>(OH_AI_TensorGetMutableData(output_0));
       for (size_t i = 0; i < OH_AI_TensorGetElementNum(output_0); i++) {
           napi_value element;
           napi_create_double(env, static_cast<double>(output0Data[i]), &element);
           napi_set_element(env, out_data, i, element);
       }
       OH_AI_ModelDestroy(&model);
       OH_AI_ContextDestroy(&context);
       LOGI("MS_LITE_LOG: Exit runDemo()");
       return out_data;
   }
   ```
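
   Note that **RunDemo()** never releases the native resource manager it initializes. If the demo is invoked repeatedly, the handle can be released once the model file has been copied into **modelBuffer**; this cleanup is not in the original sample:

   ```c++
   // After ReadModelFile() succeeds, the native resource manager is no longer needed.
   OH_ResourceManager_ReleaseNativeResourceManager(resourcesManager);
   ```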

6. Write the **CMake** script to link the MindSpore Lite dynamic library.

   ```cmake
   # The minimum version of CMake.
   cmake_minimum_required(VERSION 3.4.1)
   project(MindSporeLiteCDemo)

   set(NATIVERENDER_ROOT_PATH ${CMAKE_CURRENT_SOURCE_DIR})

   if(DEFINED PACKAGE_FIND_FILE)
       include(${PACKAGE_FIND_FILE})
   endif()

   include_directories(${NATIVERENDER_ROOT_PATH}
                       ${NATIVERENDER_ROOT_PATH}/include)

   add_library(entry SHARED mslite_napi.cpp)
   target_link_libraries(entry PUBLIC mindspore_lite_ndk)
   target_link_libraries(entry PUBLIC hilog_ndk.z)
   target_link_libraries(entry PUBLIC rawfile.z)
   target_link_libraries(entry PUBLIC ace_napi.z)
   ```

#### Encapsulating the C++ Dynamic Library into an ArkTS Module Using N-API

1. In **entry/src/main/cpp/types/libentry/Index.d.ts**, define the ArkTS API **runDemo()**. The content is as follows:

   ```ts
   export const runDemo: (a: number[], b: Object) => Array<number>;
   ```

2. In the **oh-package.json5** file, associate the API with the .so file to form a complete ArkTS module.

   ```json
   {
     "name": "libentry.so",
     "types": "./Index.d.ts",
     "version": "1.0.0",
     "description": "MindSpore Lite inference module"
   }
   ```
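
On the native side, the **runDemo()** declared in **Index.d.ts** must also be exported through N-API module registration so that **libentry.so** actually exposes it. This glue code is not listed in the steps above; the following is a minimal sketch based on the default DevEco Studio native project template (names such as **demoModule** and **RegisterEntryModule** are illustrative):

```c++
// mslite_napi.cpp (registration glue)
#include "napi/native_api.h"

static napi_value Init(napi_env env, napi_value exports) {
    // Map the ArkTS-visible name "runDemo" to the native RunDemo() implemented earlier.
    napi_property_descriptor desc[] = {
        {"runDemo", nullptr, RunDemo, nullptr, nullptr, nullptr, napi_default, nullptr}
    };
    napi_define_properties(env, exports, sizeof(desc) / sizeof(desc[0]), desc);
    return exports;
}

static napi_module demoModule = {
    .nm_version = 1,
    .nm_flags = 0,
    .nm_filename = nullptr,
    .nm_register_func = Init,
    .nm_modname = "entry",
    .nm_priv = nullptr,
    .reserved = {0},
};

extern "C" __attribute__((constructor)) void RegisterEntryModule(void) {
    napi_module_register(&demoModule);
}
```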

#### Invoking the Encapsulated ArkTS Module for Inference and Outputting the Result

In **entry/src/main/ets/pages/Index.ets**, call the encapsulated ArkTS module to run inference and process the result.

```ts
// Index.ets
import msliteNapi from 'libentry.so';

@Entry
@Component
struct Index {
  @State modelInputHeight: number = 224;
  @State modelInputWidth: number = 224;
  @State max: number = 0;
  @State maxIndex: number = 0;
  @State maxArray: Array<number> = [];
  @State maxIndexArray: Array<number> = [];

  build() {
    Row() {
      Column() {
        Button() {
          Text('photo')
            .fontSize(30)
            .fontWeight(FontWeight.Bold)
        }
        .type(ButtonType.Capsule)
        .margin({
          top: 20
        })
        .backgroundColor('#0D9FFB')
        .width('40%')
        .height('5%')
        .onClick(() => {
          let resMgr = this.getUIContext()?.getHostContext()?.getApplicationContext().resourceManager;
          let float32View = new Float32Array(this.modelInputHeight * this.modelInputWidth * 3);
          // Image input and preprocessing
          // Call the C++ runDemo function. The buffer data of the input image is stored in float32View after preprocessing. For details, see Image Input and Preprocessing.
          console.info('MS_LITE_LOG: *** Start MSLite Demo ***');
          let output: Array<number> = msliteNapi.runDemo(Array.from(float32View), resMgr);

          // Obtain the categories with the highest confidence values.
          this.max = 0;
          this.maxIndex = 0;
          this.maxArray = [];
          this.maxIndexArray = [];
          let newArray = output.filter(value => value !== this.max);
          for (let n = 0; n < 5; n++) {
            this.max = output[0];
            this.maxIndex = 0;
            for (let m = 0; m < newArray.length; m++) {
              if (newArray[m] > this.max) {
                this.max = newArray[m];
                this.maxIndex = m;
              }
            }
            this.maxArray.push(Math.round(this.max * 10000));
            this.maxIndexArray.push(this.maxIndex);
            // Remove the current maximum so that the next iteration finds the next-highest value.
            newArray = newArray.filter(value => value !== this.max);
          }
          console.info('MS_LITE_LOG: max:' + this.maxArray);
          console.info('MS_LITE_LOG: maxIndex:' + this.maxIndexArray);
          console.info('MS_LITE_LOG: *** Finished MSLite Demo ***');
        })
      }.width('100%')
    }
    .height('100%')
  }
}
```

### Debugging and Verification

1. In DevEco Studio, connect the device, and click **Run entry** to build, install, and start the HAP.

   ```shell
   Launching com.samples.mindsporelitecdemo
   $ hdc shell aa force-stop com.samples.mindsporelitecdemo
   $ hdc shell mkdir data/local/tmp/xxx
   $ hdc file send C:\Users\xxx\MindSporeLiteCDemo\entry\build\default\outputs\default\entry-default-signed.hap "data/local/tmp/xxx"
   $ hdc shell bm install -p data/local/tmp/xxx
   $ hdc shell rm -rf data/local/tmp/xxx
   $ hdc shell aa start -a EntryAbility -b com.samples.mindsporelitecdemo
   ```

2. Touch the **photo** button on the device screen, select an image, and touch **OK**. The classification result of the selected image is displayed on the device screen. Filter the log output by the keyword **MS_LITE**. The following information is displayed:

   ```verilog
   08-05 17:15:52.001   4684-4684    A03d00/JSAPP                   pid-4684              I     MS_LITE_LOG: PhotoViewPicker.select successfully, photoSelectResult uri: {"photoUris":["file://media/Photo/13/IMG_1501955351_012/plant.jpg"]}
   ...
   08-05 17:15:52.627   4684-4684    A03d00/JSAPP                   pid-4684              I     MS_LITE_LOG: crop info.width = 224
   08-05 17:15:52.627   4684-4684    A03d00/JSAPP                   pid-4684              I     MS_LITE_LOG: crop info.height = 224
   08-05 17:15:52.628   4684-4684    A03d00/JSAPP                   pid-4684              I     MS_LITE_LOG: Succeeded in reading image pixel data, buffer: 200704
   08-05 17:15:52.971   4684-4684    A03d00/JSAPP                   pid-4684              I     MS_LITE_LOG: float32View data: float32View data: 1.2385478019714355 1.308123230934143 1.4722440242767334 1.2385478019714355 1.308123230934143 1.4722440242767334 1.2385478019714355 1.308123230934143 1.4722440242767334 1.2385478019714355 1.308123230934143 1.4722440242767334 1.2385478019714355 1.308123230934143 1.4722440242767334 1.2385478019714355 1.308123230934143 1.4722440242767334 1.2385478019714355 1.308123230934143
   08-05 17:15:52.971   4684-4684    A03d00/JSAPP                   pid-4684              I     MS_LITE_LOG: *** Start MSLite Demo ***
   08-05 17:15:53.454   4684-4684    A00000/[MSLiteNapi]            pid-4684              I     MS_LITE_LOG: Build MSLite model success.
   08-05 17:15:53.753   4684-4684    A00000/[MSLiteNapi]            pid-4684              I     MS_LITE_LOG: Run MSLite model Predict success.
   08-05 17:15:53.753   4684-4684    A00000/[MSLiteNapi]            pid-4684              I     MS_LITE_LOG: Get model outputs:
   08-05 17:15:53.753   4684-4684    A00000/[MSLiteNapi]            pid-4684              I     MS_LITE_LOG: - Tensor 0 name is: Default/head-MobileNetV2Head/Sigmoid-op466.
   08-05 17:15:53.753   4684-4684    A00000/[MSLiteNapi]            pid-4684              I     MS_LITE_LOG: - Tensor data is:
   08-05 17:15:53.753   4684-4684    A00000/[MSLiteNapi]            pid-4684              I     MS_LITE_LOG: 3.43385e-06 1.40285e-05 9.11969e-07 4.91007e-05 9.50266e-07 3.94537e-07 0.0434676 3.97196e-05 0.00054832 0.000246202 1.576e-05 3.6494e-06 1.23553e-05 0.196977 5.3028e-05 3.29346e-05 4.90475e-07 1.66109e-06 7.03273e-06 8.83677e-07 3.1365e-06
   08-05 17:15:53.781   4684-4684    A03d00/JSAPP                   pid-4684              W     MS_LITE_WARN: output length =  500 ;value =  0.0000034338463592575863,0.000014028532859811094,9.119685273617506e-7,0.000049100715841632336,9.502661555416125e-7,3.945370394831116e-7,0.04346757382154465,0.00003971960904891603,0.0005483203567564487,0.00024620210751891136,0.000015759984307806008,0.0000036493988773145247,0.00001235533181898063,0.1969769448041916,0.000053027983085485175,0.000032934600312728435,4.904751449430478e-7,0.0000016610861166554969,0.000007032729172351537,8.836767619868624e-7
   08-05 17:15:53.831   4684-4684    A03d00/JSAPP                   pid-4684              I     MS_LITE_LOG: max:9497,7756,1970,435,46
   08-05 17:15:53.831   4684-4684    A03d00/JSAPP                   pid-4684              I     MS_LITE_LOG: maxIndex:323,46,13,6,349
   08-05 17:15:53.831   4684-4684    A03d00/JSAPP                   pid-4684              I     MS_LITE_LOG: *** Finished MSLite Demo ***
   ```


### Effects

Touch the **photo** button on the device screen, select an image, and touch **OK**. The top 4 categories of the image are displayed below the image.

![stepc1](figures/stepc1.png)           ![step2](figures/step2.png)

![step3](figures/step3.png)         ![stepc4](figures/stepc4.png)