# Image Classification with MindSpore Lite (C/C++)

<!--Kit: MindSpore Lite Kit-->
<!--Subsystem: AI-->
<!--Owner: @zhuguodong8-->
<!--Designer: @zhuguodong8; @jjfeing-->
<!--Tester: @principal87-->
<!--Adviser: @ge-yafang-->

## Scenario

Developers can use the [MindSpore](../../reference/apis-mindspore-lite-kit/capi-mindspore.md) APIs to integrate MindSpore Lite capabilities directly into UI code, quickly deploy AI algorithms, and run AI model inference to build an image classification application.

Image classification identifies the objects in an image and is widely used in medical image analysis, autonomous driving, e-commerce, facial recognition, and other fields.

## Basic Concepts

- N-API: a set of interfaces for building native ArkTS components. With N-API, a library written in C/C++ can be wrapped as an ArkTS module.

## Development Process

1. Select an image classification model.
2. Run the model on the device with MindSpore Lite to classify a selected image.

## Environment Setup

Install DevEco Studio 4.1 or later, and update the SDK to API version 11 or later.

## Development Procedure

This guide walks through image classification with MindSpore Lite, using inference on a single image from the gallery as an example.

### Selecting a Model

This sample uses the image classification model file [mobilenetv2.ms](https://download.mindspore.cn/model_zoo/official/lite/mobilenetv2_openimage_lite/1.5/mobilenetv2.ms), placed in the entry/src/main/resources/rawfile directory of the project.

If you have another pre-trained image classification model, convert the original model to the .ms format as described in [MindSpore Lite Model Conversion](mindspore-lite-converter-guidelines.md).

### Writing Image Input and Preprocessing Code

1. This example uses a gallery image as input. Call [@ohos.file.picker](../../reference/apis-core-file-kit/js-apis-file-picker.md) to let the user select an image file from the gallery.

2. Based on the model's input size, call the [@ohos.multimedia.image](../../reference/apis-image-kit/arkts-apis-image.md) (image processing) and [@ohos.file.fs](../../reference/apis-core-file-kit/js-apis-file-fs.md) (basic file operations) APIs to crop the selected image, obtain its buffer data, and normalize it.

   ```ts
   // Index.ets
   import { fileIo } from '@kit.CoreFileKit';
   import { photoAccessHelper } from '@kit.MediaLibraryKit';
   import { BusinessError } from '@kit.BasicServicesKit';
   import { image } from '@kit.ImageKit';

   @Entry
   @Component
   struct Index {
     @State modelName: string = 'mobilenetv2.ms';
     @State modelInputHeight: number = 224;
     @State modelInputWidth: number = 224;
     @State uris: Array<string> = [];

     build() {
       Row() {
         Column() {
           Button() {
             Text('photo')
               .fontSize(30)
               .fontWeight(FontWeight.Bold)
           }
           .type(ButtonType.Capsule)
           .margin({
             top: 20
           })
           .backgroundColor('#0D9FFB')
           .width('40%')
           .height('5%')
           .onClick(() => {
             // Pick an image from the gallery.
             // 1. Create a photo select options instance.
             let photoSelectOptions = new photoAccessHelper.PhotoSelectOptions();

             // 2. Restrict the selectable media type to IMAGE and set the maximum number of selectable files.
             photoSelectOptions.MIMEType = photoAccessHelper.PhotoViewMIMETypes.IMAGE_TYPE;
             photoSelectOptions.maxSelectNumber = 1;

             // 3. Create a gallery picker instance and call select() to open the gallery UI for file selection.
             //    On success, the photoSelectResult result set is returned.
             let photoPicker = new photoAccessHelper.PhotoViewPicker();
             photoPicker.select(photoSelectOptions,
               async (err: BusinessError, photoSelectResult: photoAccessHelper.PhotoSelectResult) => {
                 if (err) {
                   console.error('MS_LITE_ERR: PhotoViewPicker.select failed with err: ' + JSON.stringify(err));
                   return;
                 }
                 console.info('MS_LITE_LOG: PhotoViewPicker.select successfully, photoSelectResult uri: ' +
                 JSON.stringify(photoSelectResult));
                 this.uris = photoSelectResult.photoUris;
                 console.info('MS_LITE_LOG: uri: ' + this.uris);
                 // Preprocess the image data.
                 try {
                   // 1. Open the file through its uri with fileIo.openSync to obtain a file descriptor.
                   let file = fileIo.openSync(this.uris[0], fileIo.OpenMode.READ_ONLY);
                   console.info('MS_LITE_LOG: file fd: ' + file.fd);

                   // 2. Read the file data through the fd with fileIo.readSync.
                   let inputBuffer = new ArrayBuffer(4096000);
                   let readLen = fileIo.readSync(file.fd, inputBuffer);
                   console.info('MS_LITE_LOG: readSync data to file succeed and inputBuffer size is:' + readLen);

                   // 3. Preprocess through a PixelMap.
                   let imageSource = image.createImageSource(file.fd);
                   if (imageSource == undefined) {
                     console.error('MS_LITE_ERR: createImageSource failed.');
                     return;
                   }
                   imageSource.createPixelMap().then((pixelMap) => {
                     pixelMap.getImageInfo().then((info) => {
                       console.info('MS_LITE_LOG: info.width = ' + info.size.width);
                       console.info('MS_LITE_LOG: info.height = ' + info.size.height);
                       // 4. Scale and crop the image to the model's input size, then read the pixel buffer into readBuffer.
                       pixelMap.scale(256.0 / info.size.width, 256.0 / info.size.height).then(() => {
                         pixelMap.crop({
                           x: 16,
                           y: 16,
                           size: { height: this.modelInputHeight, width: this.modelInputWidth }
                         })
                           .then(async () => {
                             let info = await pixelMap.getImageInfo();
                             console.info('MS_LITE_LOG: crop info.width = ' + info.size.width);
                             console.info('MS_LITE_LOG: crop info.height = ' + info.size.height);
                             // Size of the pixel buffer to create: height * width * 4 bytes per pixel.
                             let readBuffer = new ArrayBuffer(this.modelInputHeight * this.modelInputWidth * 4);
                             await pixelMap.readPixelsToBuffer(readBuffer);
                             console.info('MS_LITE_LOG: Succeeded in reading image pixel data, buffer: ' +
                             readBuffer.byteLength);
                             // Convert readBuffer to float32 and normalize each channel.
                             const imageArr =
                               new Uint8Array(readBuffer.slice(0, this.modelInputHeight * this.modelInputWidth * 4));
                             console.info('MS_LITE_LOG: imageArr length: ' + imageArr.length);
                             let means = [0.485, 0.456, 0.406];
                             let stds = [0.229, 0.224, 0.225];
                             let float32View = new Float32Array(this.modelInputHeight * this.modelInputWidth * 3);
                             let index = 0;
                             for (let i = 0; i < imageArr.length; i++) {
                               if ((i + 1) % 4 == 0) {
                                 float32View[index] = (imageArr[i - 3] / 255.0 - means[0]) / stds[0]; // B
                                 float32View[index + 1] = (imageArr[i - 2] / 255.0 - means[1]) / stds[1]; // G
                                 float32View[index + 2] = (imageArr[i - 1] / 255.0 - means[2]) / stds[2]; // R
                                 index += 3;
                               }
                             }
                             console.info('MS_LITE_LOG: float32View length: ' + float32View.length);
                             let printStr = 'float32View data:';
                             for (let i = 0; i < 20; i++) {
                               printStr += ' ' + float32View[i];
                             }
                             console.info('MS_LITE_LOG: float32View data: ' + printStr);
                           })
                       })
                     })
                   })
                 } catch (err) {
                   console.error('MS_LITE_LOG: uri: open file fd failed.' + err);
                 }
               })
           })
         }.width('100%')
       }
       .height('100%')
     }
   }
   ```
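
   The normalization step above maps each 8-bit channel value `x` to `(x / 255 - mean) / std` and drops the fourth (alpha) byte of every 4-byte pixel. As a cross-check, the same transform can be sketched as a standalone C++ function (a hypothetical helper for illustration, not part of the sample):

   ```cpp
   #include <array>
   #include <cassert>
   #include <cmath>
   #include <cstdint>
   #include <vector>

   // Convert packed 4-byte pixels to a packed float32 array of 3-channel
   // triples, normalized with the means/stds used in the sample above.
   // The first three bytes of each pixel map to means[0..2]/stds[0..2],
   // mirroring the ArkTS loop; the fourth byte (alpha) is discarded.
   std::vector<float> NormalizeRgba(const std::vector<uint8_t> &rgba) {
       const std::array<float, 3> means = {0.485f, 0.456f, 0.406f};
       const std::array<float, 3> stds = {0.229f, 0.224f, 0.225f};
       std::vector<float> out;
       out.reserve(rgba.size() / 4 * 3);
       for (size_t i = 0; i + 3 < rgba.size(); i += 4) {
           for (size_t c = 0; c < 3; c++) {
               out.push_back((rgba[i + c] / 255.0f - means[c]) / stds[c]);
           }
       }
       return out;
   }
   ```

   For a 224x224 input this yields 224 * 224 * 3 floats, which is exactly the `float32View` length the ArkTS code allocates.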

### Writing Inference Code

Call the [MindSpore](../../reference/apis-mindspore-lite-kit/capi-mindspore.md) APIs to run on-device inference. The inference code flow is as follows.

1. Include the required header files.

   ```c++
   #include <iostream>
   #include <sstream>
   #include <stdlib.h>
   #include <hilog/log.h>
   #include <rawfile/raw_file_manager.h>
   #include <mindspore/types.h>
   #include <mindspore/model.h>
   #include <mindspore/context.h>
   #include <mindspore/status.h>
   #include <mindspore/tensor.h>
   #include "napi/native_api.h"
   ```

2. Read the model file.

   ```c++
   #define LOGI(...) ((void)OH_LOG_Print(LOG_APP, LOG_INFO, LOG_DOMAIN, "[MSLiteNapi]", __VA_ARGS__))
   #define LOGD(...) ((void)OH_LOG_Print(LOG_APP, LOG_DEBUG, LOG_DOMAIN, "[MSLiteNapi]", __VA_ARGS__))
   #define LOGW(...) ((void)OH_LOG_Print(LOG_APP, LOG_WARN, LOG_DOMAIN, "[MSLiteNapi]", __VA_ARGS__))
   #define LOGE(...) ((void)OH_LOG_Print(LOG_APP, LOG_ERROR, LOG_DOMAIN, "[MSLiteNapi]", __VA_ARGS__))

   void *ReadModelFile(NativeResourceManager *nativeResourceManager, const std::string &modelName, size_t *modelSize) {
       auto rawFile = OH_ResourceManager_OpenRawFile(nativeResourceManager, modelName.c_str());
       if (rawFile == nullptr) {
           LOGE("MS_LITE_ERR: Open model file failed");
           return nullptr;
       }
       long fileSize = OH_ResourceManager_GetRawFileSize(rawFile);
       if (fileSize <= 0) {
           LOGE("MS_LITE_ERR: FileSize not correct");
           OH_ResourceManager_CloseRawFile(rawFile);
           return nullptr;
       }
       void *modelBuffer = malloc(fileSize);
       if (modelBuffer == nullptr) {
           LOGE("MS_LITE_ERR: malloc failed");
           OH_ResourceManager_CloseRawFile(rawFile);
           return nullptr;
       }
       int ret = OH_ResourceManager_ReadRawFile(rawFile, modelBuffer, fileSize);
       if (ret == 0) {
           LOGE("MS_LITE_ERR: OH_ResourceManager_ReadRawFile failed");
           free(modelBuffer);
           OH_ResourceManager_CloseRawFile(rawFile);
           return nullptr;
       }
       OH_ResourceManager_CloseRawFile(rawFile);
       *modelSize = fileSize;
       return modelBuffer;
   }
   ```
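
   The rawfile flow above follows the usual read-whole-binary-file pattern: query the size, allocate a buffer, read, close, and return the buffer to the caller (who frees it). For reference, an equivalent sketch using only the C standard library (a hypothetical `ReadFileToBuffer` helper, not part of the sample):

   ```cpp
   #include <cassert>
   #include <cstdio>
   #include <cstdlib>

   // Read an entire binary file into a malloc'd buffer; the caller frees it.
   // Returns nullptr (and leaves *size untouched) on any failure.
   void *ReadFileToBuffer(const char *path, size_t *size) {
       FILE *fp = std::fopen(path, "rb");
       if (fp == nullptr) {
           return nullptr;
       }
       std::fseek(fp, 0, SEEK_END);
       long fileSize = std::ftell(fp);
       if (fileSize <= 0) {
           std::fclose(fp);
           return nullptr;
       }
       std::rewind(fp);
       void *buffer = std::malloc(fileSize);
       if (buffer == nullptr || std::fread(buffer, 1, fileSize, fp) != static_cast<size_t>(fileSize)) {
           std::free(buffer);  // free(nullptr) is a no-op
           std::fclose(fp);
           return nullptr;
       }
       std::fclose(fp);
       *size = static_cast<size_t>(fileSize);
       return buffer;
   }
   ```

   The error handling mirrors `ReadModelFile` above: every failure path releases the resources acquired so far before returning.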

3. Create a context, configure parameters such as the number of threads and the device type, and load the model. The model used in this sample does not support inference through NNRt.

   ```c++
   void DestroyModelBuffer(void **buffer) {
       if (buffer == nullptr) {
           return;
       }
       free(*buffer);
       *buffer = nullptr;
   }

   OH_AI_ContextHandle CreateMSLiteContext(void *modelBuffer) {
       // Set executing context for model.
       auto context = OH_AI_ContextCreate();
       if (context == nullptr) {
           DestroyModelBuffer(&modelBuffer);
           LOGE("MS_LITE_ERR: Create MSLite context failed.\n");
           return nullptr;
       }
       // The model in this sample does not support OH_AI_DeviceInfoCreate(OH_AI_DEVICETYPE_NNRT).
       auto cpu_device_info = OH_AI_DeviceInfoCreate(OH_AI_DEVICETYPE_CPU);

       OH_AI_DeviceInfoSetEnableFP16(cpu_device_info, true);
       OH_AI_ContextAddDeviceInfo(context, cpu_device_info);

       LOGI("MS_LITE_LOG: Build MSLite context success.\n");
       return context;
   }

   OH_AI_ModelHandle CreateMSLiteModel(void *modelBuffer, size_t modelSize, OH_AI_ContextHandle context) {
       // Create model
       auto model = OH_AI_ModelCreate();
       if (model == nullptr) {
           DestroyModelBuffer(&modelBuffer);
           LOGE("MS_LITE_ERR: Allocate MSLite Model failed.\n");
           return nullptr;
       }

       // Build model object
       auto build_ret = OH_AI_ModelBuild(model, modelBuffer, modelSize, OH_AI_MODELTYPE_MINDIR, context);
       DestroyModelBuffer(&modelBuffer);
       if (build_ret != OH_AI_STATUS_SUCCESS) {
           OH_AI_ModelDestroy(&model);
           LOGE("MS_LITE_ERR: Build MSLite model failed.\n");
           return nullptr;
       }
       LOGI("MS_LITE_LOG: Build MSLite model success.\n");
       return model;
   }
   ```

4. Set the model input data and run inference.

   ```c++
   constexpr int K_NUM_PRINT_OF_OUT_DATA = 20;

   // Fill the model input tensor with the preprocessed data.
   int FillInputTensor(OH_AI_TensorHandle input, const std::vector<float> &input_data) {
       if (OH_AI_TensorGetDataType(input) == OH_AI_DATATYPE_NUMBERTYPE_FLOAT32) {
           float *data = (float *)OH_AI_TensorGetMutableData(input);
           for (size_t i = 0; i < OH_AI_TensorGetElementNum(input); i++) {
               data[i] = input_data[i];
           }
           return OH_AI_STATUS_SUCCESS;
       } else {
           return OH_AI_STATUS_LITE_ERROR;
       }
   }

   // Run model inference.
   int RunMSLiteModel(OH_AI_ModelHandle model, const std::vector<float> &input_data) {
       // Set input data for model.
       auto inputs = OH_AI_ModelGetInputs(model);

       auto ret = FillInputTensor(inputs.handle_list[0], input_data);
       if (ret != OH_AI_STATUS_SUCCESS) {
           LOGE("MS_LITE_ERR: RunMSLiteModel set input error.\n");
           return OH_AI_STATUS_LITE_ERROR;
       }
       // Get model output.
       auto outputs = OH_AI_ModelGetOutputs(model);
       // Predict model.
       auto predict_ret = OH_AI_ModelPredict(model, inputs, &outputs, nullptr, nullptr);
       if (predict_ret != OH_AI_STATUS_SUCCESS) {
           LOGE("MS_LITE_ERR: MSLite Predict error.\n");
           return OH_AI_STATUS_LITE_ERROR;
       }
       LOGI("MS_LITE_LOG: Run MSLite model Predict success.\n");
       // Print output tensor data.
       LOGI("MS_LITE_LOG: Get model outputs:\n");
       for (size_t i = 0; i < outputs.handle_num; i++) {
           auto tensor = outputs.handle_list[i];
           LOGI("MS_LITE_LOG: - Tensor %{public}d name is: %{public}s.\n", static_cast<int>(i),
                OH_AI_TensorGetName(tensor));
           LOGI("MS_LITE_LOG: - Tensor %{public}d size is: %{public}d.\n", static_cast<int>(i),
                (int)OH_AI_TensorGetDataSize(tensor));
           LOGI("MS_LITE_LOG: - Tensor data is:\n");
           auto out_data = reinterpret_cast<const float *>(OH_AI_TensorGetData(tensor));
           std::stringstream outStr;
           for (int j = 0; (j < OH_AI_TensorGetElementNum(tensor)) && (j <= K_NUM_PRINT_OF_OUT_DATA); j++) {
               outStr << out_data[j] << " ";
           }
           LOGI("MS_LITE_LOG: %{public}s", outStr.str().c_str());
       }
       return OH_AI_STATUS_SUCCESS;
   }
   ```

5. Call the methods above to implement the complete model inference flow.

   ```c++
   static napi_value RunDemo(napi_env env, napi_callback_info info) {
       LOGI("MS_LITE_LOG: Enter runDemo()");
       napi_value error_ret;
       napi_create_int32(env, -1, &error_ret);
       // Process the incoming arguments.
       size_t argc = 2;
       napi_value argv[2] = {nullptr};
       napi_get_cb_info(env, info, &argc, argv, nullptr, nullptr);
       bool isArray = false;
       napi_is_array(env, argv[0], &isArray);
       uint32_t length = 0;
       // Get the length of the input array.
       napi_get_array_length(env, argv[0], &length);
       LOGI("MS_LITE_LOG: argv array length = %{public}d", length);
       std::vector<float> input_data;
       double param = 0;
       for (uint32_t i = 0; i < length; i++) {
           napi_value value;
           napi_get_element(env, argv[0], i, &value);
           napi_get_value_double(env, value, &param);
           input_data.push_back(static_cast<float>(param));
       }
       std::stringstream outstr;
       for (int i = 0; i < K_NUM_PRINT_OF_OUT_DATA; i++) {
           outstr << input_data[i] << " ";
       }
       LOGI("MS_LITE_LOG: input_data = %{public}s", outstr.str().c_str());
       // Read model file
       const std::string modelName = "mobilenetv2.ms";
       LOGI("MS_LITE_LOG: Run model: %{public}s", modelName.c_str());
       size_t modelSize;
       auto resourcesManager = OH_ResourceManager_InitNativeResourceManager(env, argv[1]);
       auto modelBuffer = ReadModelFile(resourcesManager, modelName, &modelSize);
       if (modelBuffer == nullptr) {
           LOGE("MS_LITE_ERR: Read model failed");
           return error_ret;
       }
       LOGI("MS_LITE_LOG: Read model file success");

       auto context = CreateMSLiteContext(modelBuffer);
       if (context == nullptr) {
           LOGE("MS_LITE_ERR: MSLiteFwk Build context failed.\n");
           return error_ret;
       }
       auto model = CreateMSLiteModel(modelBuffer, modelSize, context);
       if (model == nullptr) {
           OH_AI_ContextDestroy(&context);
           LOGE("MS_LITE_ERR: MSLiteFwk Build model failed.\n");
           return error_ret;
       }
       int ret = RunMSLiteModel(model, input_data);
       if (ret != OH_AI_STATUS_SUCCESS) {
           OH_AI_ModelDestroy(&model);
           OH_AI_ContextDestroy(&context);
           LOGE("MS_LITE_ERR: RunMSLiteModel failed.\n");
           return error_ret;
       }
       napi_value out_data;
       napi_create_array(env, &out_data);
       auto outputs = OH_AI_ModelGetOutputs(model);
       OH_AI_TensorHandle output_0 = outputs.handle_list[0];
       float *output0Data = reinterpret_cast<float *>(OH_AI_TensorGetMutableData(output_0));
       for (size_t i = 0; i < OH_AI_TensorGetElementNum(output_0); i++) {
           napi_value element;
           napi_create_double(env, static_cast<double>(output0Data[i]), &element);
           napi_set_element(env, out_data, i, element);
       }
       OH_AI_ModelDestroy(&model);
       OH_AI_ContextDestroy(&context);
       LOGI("MS_LITE_LOG: Exit runDemo()");
       return out_data;
   }
   ```

6. Write the CMake script and link the MindSpore Lite dynamic library.

   ```cmake
   # The minimum version of CMake.
   cmake_minimum_required(VERSION 3.4.1)
   project(MindSporeLiteCDemo)

   set(NATIVERENDER_PATH ${CMAKE_CURRENT_SOURCE_DIR})

   if(DEFINED PACKAGE_FIND_FILE)
       include(${PACKAGE_FIND_FILE})
   endif()

   include_directories(${NATIVERENDER_PATH}
                       ${NATIVERENDER_PATH}/include)

   add_library(entry SHARED mslite_napi.cpp)
   target_link_libraries(entry PUBLIC mindspore_lite_ndk)
   target_link_libraries(entry PUBLIC hilog_ndk.z)
   target_link_libraries(entry PUBLIC rawfile.z)
   target_link_libraries(entry PUBLIC ace_napi.z)
   ```

### Wrapping the C++ Dynamic Library into an ArkTS Module with N-API

1. In entry/src/main/cpp/types/libentry/Index.d.ts, define the ArkTS interface `runDemo()`:

   ```ts
   export const runDemo: (a: number[], b: Object) => Array<number>;
   ```

2. In the oh-package.json5 file, associate the API with the .so file to form a complete ArkTS module:

   ```json
   {
     "name": "libentry.so",
     "types": "./Index.d.ts",
     "version": "1.0.0",
     "description": "MindSpore Lite inference module"
   }
   ```

### Calling the Wrapped ArkTS Module for Inference and Output

In entry/src/main/ets/pages/Index.ets, call the wrapped ArkTS module and process the inference result.

```ts
// Index.ets
import msliteNapi from 'libentry.so';

@Entry
@Component
struct Index {
  @State modelInputHeight: number = 224;
  @State modelInputWidth: number = 224;
  @State max: number = 0;
  @State maxIndex: number = 0;
  @State maxArray: Array<number> = [];
  @State maxIndexArray: Array<number> = [];

  build() {
    Row() {
      Column() {
        Button() {
          Text('photo')
            .fontSize(30)
            .fontWeight(FontWeight.Bold)
        }
        .type(ButtonType.Capsule)
        .margin({
          top: 20
        })
        .backgroundColor('#0D9FFB')
        .width('40%')
        .height('5%')
        .onClick(() => {
          // Resource manager used by the native side to read the model from rawfile.
          let resMgr = getContext(this).resourceManager;
          let float32View = new Float32Array(this.modelInputHeight * this.modelInputWidth * 3);
          // Image input and preprocessing: after preprocessing, the buffer data is stored in
          // float32View; see its definition and handling in the image input and preprocessing
          // section above. Then call the C++ runDemo method.
          console.info('MS_LITE_LOG: *** Start MSLite Demo ***');
          let output: Array<number> = msliteNapi.runDemo(Array.from(float32View), resMgr);

          // Find the largest classification scores.
          this.max = 0;
          this.maxIndex = 0;
          this.maxArray = [];
          this.maxIndexArray = [];
          let newArray = output.filter(value => value !== this.max);
          for (let n = 0; n < 5; n++) {
            this.max = output[0];
            this.maxIndex = 0;
            for (let m = 0; m < newArray.length; m++) {
              if (newArray[m] > this.max) {
                this.max = newArray[m];
                this.maxIndex = m;
              }
            }
            this.maxArray.push(Math.round(this.max * 10000));
            this.maxIndexArray.push(this.maxIndex);
            // Remove the current maximum before the next pass.
            newArray = newArray.filter(value => value !== this.max);
          }
          console.info('MS_LITE_LOG: max:' + this.maxArray);
          console.info('MS_LITE_LOG: maxIndex:' + this.maxIndexArray);
          console.info('MS_LITE_LOG: *** Finished MSLite Demo ***');
        })
      }.width('100%')
    }
    .height('100%')
  }
}
```
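
The top-5 loop above records indices into the progressively filtered array, so they do not correspond directly to positions in the original output. For comparison, a plain top-k selection that preserves the original indices can be sketched in C++ (a hypothetical `TopK` helper, not the sample's code):

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <utility>
#include <vector>

// Return the (index, score) pairs of the k largest scores, ordered by
// descending score; indices refer to positions in the original array.
std::vector<std::pair<size_t, float>> TopK(const std::vector<float> &scores, size_t k) {
    std::vector<std::pair<size_t, float>> ranked;
    ranked.reserve(scores.size());
    for (size_t i = 0; i < scores.size(); i++) {
        ranked.emplace_back(i, scores[i]);
    }
    k = std::min(k, ranked.size());
    // Only the first k positions need to be sorted.
    std::partial_sort(ranked.begin(), ranked.begin() + k, ranked.end(),
                      [](const std::pair<size_t, float> &a, const std::pair<size_t, float> &b) {
                          return a.second > b.second;
                      });
    ranked.resize(k);
    return ranked;
}
```

The returned indices can then be mapped straight to label IDs, which avoids the index drift introduced by filtering.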

### Debugging and Verification

1. In DevEco Studio, connect the device and click Run entry to build the HAP. The console output looks like this:

   ```shell
   Launching com.samples.mindsporelitecdemo
   $ hdc shell aa force-stop com.samples.mindsporelitecdemo
   $ hdc shell mkdir data/local/tmp/xxx
   $ hdc file send C:\Users\xxx\MindSporeLiteCDemo\entry\build\default\outputs\default\entry-default-signed.hap "data/local/tmp/xxx"
   $ hdc shell bm install -p data/local/tmp/xxx
   $ hdc shell rm -rf data/local/tmp/xxx
   $ hdc shell aa start -a EntryAbility -b com.samples.mindsporelitecdemo
   ```

2. On the device screen, tap the photo button, select an image, and tap OK. The classification result of the selected image appears on the screen. Filtering the logs by the keyword "MS_LITE" yields the following:

   ```text
   08-05 17:15:52.001   4684-4684    A03d00/JSAPP                   pid-4684              I     MS_LITE_LOG: PhotoViewPicker.select successfully, photoSelectResult uri: {"photoUris":["file://media/Photo/13/IMG_1501955351_012/plant.jpg"]}
   ...
   08-05 17:15:52.627   4684-4684    A03d00/JSAPP                   pid-4684              I     MS_LITE_LOG: crop info.width = 224
   08-05 17:15:52.627   4684-4684    A03d00/JSAPP                   pid-4684              I     MS_LITE_LOG: crop info.height = 224
   08-05 17:15:52.628   4684-4684    A03d00/JSAPP                   pid-4684              I     MS_LITE_LOG: Succeeded in reading image pixel data, buffer: 200704
   08-05 17:15:52.971   4684-4684    A03d00/JSAPP                   pid-4684              I     MS_LITE_LOG: float32View data: float32View data: 1.2385478019714355 1.308123230934143 1.4722440242767334 1.2385478019714355 1.308123230934143 1.4722440242767334 1.2385478019714355 1.308123230934143 1.4722440242767334 1.2385478019714355 1.308123230934143 1.4722440242767334 1.2385478019714355 1.308123230934143 1.4722440242767334 1.2385478019714355 1.308123230934143 1.4722440242767334 1.2385478019714355 1.308123230934143
   08-05 17:15:52.971   4684-4684    A03d00/JSAPP                   pid-4684              I     MS_LITE_LOG: *** Start MSLite Demo ***
   08-05 17:15:53.454   4684-4684    A00000/[MSLiteNapi]            pid-4684              I     MS_LITE_LOG: Build MSLite model success.
   08-05 17:15:53.753   4684-4684    A00000/[MSLiteNapi]            pid-4684              I     MS_LITE_LOG: Run MSLite model Predict success.
   08-05 17:15:53.753   4684-4684    A00000/[MSLiteNapi]            pid-4684              I     MS_LITE_LOG: Get model outputs:
   08-05 17:15:53.753   4684-4684    A00000/[MSLiteNapi]            pid-4684              I     MS_LITE_LOG: - Tensor 0 name is: Default/head-MobileNetV2Head/Sigmoid-op466.
   08-05 17:15:53.753   4684-4684    A00000/[MSLiteNapi]            pid-4684              I     MS_LITE_LOG: - Tensor data is:
   08-05 17:15:53.753   4684-4684    A00000/[MSLiteNapi]            pid-4684              I     MS_LITE_LOG: 3.43385e-06 1.40285e-05 9.11969e-07 4.91007e-05 9.50266e-07 3.94537e-07 0.0434676 3.97196e-05 0.00054832 0.000246202 1.576e-05 3.6494e-06 1.23553e-05 0.196977 5.3028e-05 3.29346e-05 4.90475e-07 1.66109e-06 7.03273e-06 8.83677e-07 3.1365e-06
   08-05 17:15:53.781   4684-4684    A03d00/JSAPP                   pid-4684              W     MS_LITE_WARN: output length =  500 ;value =  0.0000034338463592575863,0.000014028532859811094,9.119685273617506e-7,0.000049100715841632336,9.502661555416125e-7,3.945370394831116e-7,0.04346757382154465,0.00003971960904891603,0.0005483203567564487,0.00024620210751891136,0.000015759984307806008,0.0000036493988773145247,0.00001235533181898063,0.1969769448041916,0.000053027983085485175,0.000032934600312728435,4.904751449430478e-7,0.0000016610861166554969,0.000007032729172351537,8.836767619868624e-7
   08-05 17:15:53.831   4684-4684    A03d00/JSAPP                   pid-4684              I     MS_LITE_LOG: max:9497,7756,1970,435,46
   08-05 17:15:53.831   4684-4684    A03d00/JSAPP                   pid-4684              I     MS_LITE_LOG: maxIndex:323,46,13,6,349
   08-05 17:15:53.831   4684-4684    A03d00/JSAPP                   pid-4684              I     MS_LITE_LOG: *** Finished MSLite Demo ***
   ```

### Result

On the device, tap the photo button, select an image from the gallery, and tap OK. The top 4 classification results for the selected image are displayed below it.

![stepc1](figures/stepc1.png)           ![step2](figures/step2.png)

![step3](figures/step3.png)         ![stepc4](figures/stepc4.png)

## Related Samples

The following sample demonstrates image classification application development with MindSpore Lite:

- [MindSpore Lite Application Development Based on Native APIs (C/C++) (API11)](https://gitcode.com/openharmony/applications_app_samples/tree/master/code/DocsSample/ApplicationModels/MindSporeLiteCDemo)
576