# Using MindSpore Lite for Image Classification (ArkTS)

## When to Use

You can use [@ohos.ai.mindSporeLite](../../reference/apis-mindspore-lite-kit/js-apis-mindSporeLite.md) to quickly deploy AI algorithms into your application and perform AI model inference for image classification.

Image classification recognizes the objects in an image and is widely used in medical image analysis, autonomous driving, e-commerce, and facial recognition.

## Basic Concepts

Before you get started, familiarize yourself with the following basic concepts:

**Tensor**: a special data structure similar to an array or matrix. It is the basic data structure used in MindSpore Lite network operations.

**Float16 inference mode**: an inference mode that uses the half-precision floating-point format, where a number is represented with 16 bits.

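For example, here is a minimal sketch of requesting Float16 inference through the context on the CPU (assuming the device and model support half-precision; operators without a Float16 kernel fall back to Float32):

```ts
// A minimal sketch: request Float16 (half-precision) CPU inference.
// 'preferred_fp16' is a hint, not a guarantee; the default is 'enforce_fp32'.
import { mindSporeLite } from '@kit.MindSporeLiteKit';

let context: mindSporeLite.Context = {};
context.target = ['cpu'];
context.cpu = {};
context.cpu.precisionMode = 'preferred_fp16';
```
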
## Available APIs

APIs involved in MindSpore Lite model inference are categorized into context APIs, model APIs, and tensor APIs. For details, see [@ohos.ai.mindSporeLite](../../reference/apis-mindspore-lite-kit/js-apis-mindSporeLite.md).

| API                                                                       | Description                      |
| ------------------------------------------------------------------------- | -------------------------------- |
| loadModelFromFile(model: string, context?: Context): Promise&lt;Model&gt; | Loads a model from a file.       |
| getInputs(): MSTensor[]                                                   | Obtains the model input tensors. |
| predict(inputs: MSTensor[]): Promise&lt;MSTensor[]&gt;                    | Performs model inference.        |
| getData(): ArrayBuffer                                                    | Obtains the tensor data.         |
| setData(inputArray: ArrayBuffer): void                                    | Sets the tensor data.            |

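The following sketch shows how these APIs typically chain together; the model path and input buffer are placeholders, and error handling is omitted:

```ts
// A minimal sketch chaining the APIs above: load a model from a file,
// fill the first input tensor, run inference, and read the first output.
import { mindSporeLite } from '@kit.MindSporeLiteKit';

async function runInference(modelPath: string, input: ArrayBuffer): Promise<ArrayBuffer> {
  let model: mindSporeLite.Model = await mindSporeLite.loadModelFromFile(modelPath);
  let inputs: mindSporeLite.MSTensor[] = model.getInputs();
  inputs[0].setData(input);
  let outputs: mindSporeLite.MSTensor[] = await model.predict(inputs);
  return outputs[0].getData();
}
```
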
## Development Process

1. Select an image classification model.
2. Use MindSpore Lite to run the model on the device and classify the selected image.

## Environment Setup

Install DevEco Studio 4.1 or later, and update the SDK to API version 11 or later.

## How to Develop

The following uses inference on an image in the album as an example to describe how to use MindSpore Lite to implement image classification.

### Selecting a Model

This sample application uses [mobilenetv2.ms](https://download.mindspore.cn/model_zoo/official/lite/mobilenetv2_openimage_lite/1.5/mobilenetv2.ms) as the image classification model. The model file is available in the **entry/src/main/resources/rawfile** directory of the project.

If you have other pre-trained image classification models, convert the original model into the .ms format by referring to [Using MindSpore Lite for Model Conversion](mindspore-lite-converter-guidelines.md).

### Writing Code

#### Image Input and Preprocessing

1. Call [@ohos.file.picker](../../reference/apis-core-file-kit/js-apis-file-picker.md) to select the desired image from the album.

2. Based on the model input size, call [@ohos.multimedia.image](../../reference/apis-image-kit/arkts-apis-image.md) and [@ohos.file.fs](../../reference/apis-core-file-kit/js-apis-file-fs.md) to perform operations such as cropping the image, obtaining the image buffer, and standardizing the image.

   ```ts
   // Index.ets
   import { photoAccessHelper } from '@kit.MediaLibraryKit';
   import { BusinessError } from '@kit.BasicServicesKit';
   import { image } from '@kit.ImageKit';
   import { fileIo } from '@kit.CoreFileKit';

   @Entry
   @Component
   struct Index {
     @State modelName: string = 'mobilenetv2.ms';
     @State modelInputHeight: number = 224;
     @State modelInputWidth: number = 224;
     @State uris: Array<string> = [];

     build() {
       Row() {
         Column() {
           Button() {
             Text('photo')
               .fontSize(30)
               .fontWeight(FontWeight.Bold)
           }
           .type(ButtonType.Capsule)
           .margin({
             top: 20
           })
           .backgroundColor('#0D9FFB')
           .width('40%')
           .height('5%')
           .onClick(() => {
             let resMgr = this.getUIContext()?.getHostContext()?.getApplicationContext().resourceManager;
             resMgr?.getRawFileContent(this.modelName).then(modelBuffer => {
               // Obtain images in an album.
               // 1. Create an image picker instance.
               let photoSelectOptions = new photoAccessHelper.PhotoSelectOptions();

               // 2. Set the media file type to IMAGE and set the maximum number of media files that can be selected.
               photoSelectOptions.MIMEType = photoAccessHelper.PhotoViewMIMETypes.IMAGE_TYPE;
               photoSelectOptions.maxSelectNumber = 1;

               // 3. Create an album picker instance and call select() to open the album page for file selection. After file selection is done, the result set is returned through photoSelectResult.
               let photoPicker = new photoAccessHelper.PhotoViewPicker();
               photoPicker.select(photoSelectOptions, async (
                 err: BusinessError, photoSelectResult: photoAccessHelper.PhotoSelectResult) => {
                 if (err) {
                   console.error('MS_LITE_ERR: PhotoViewPicker.select failed with err: ' + JSON.stringify(err));
                   return;
                 }
                 console.info('MS_LITE_LOG: PhotoViewPicker.select successfully, ' +
                   'photoSelectResult uri: ' + JSON.stringify(photoSelectResult));
                 this.uris = photoSelectResult.photoUris;
                 console.info('MS_LITE_LOG: uri: ' + this.uris);

                 // Preprocess the image data.
                 try {
                   // 1. Based on the specified URI, call fileIo.openSync to open the file and obtain the FD.
                   let file = fileIo.openSync(this.uris[0], fileIo.OpenMode.READ_ONLY);
                   console.info('MS_LITE_LOG: file fd: ' + file.fd);

                   // 2. Based on the FD, call fileIo.readSync to read the file data.
                   let inputBuffer = new ArrayBuffer(4096000);
                   let readLen = fileIo.readSync(file.fd, inputBuffer);
                   console.info('MS_LITE_LOG: readSync data to file succeed and inputBuffer size is:' + readLen);

                   // 3. Perform image preprocessing through PixelMap.
                   let imageSource = image.createImageSource(file.fd);
                   imageSource.createPixelMap().then((pixelMap) => {
                     pixelMap.getImageInfo().then((info) => {
                       console.info('MS_LITE_LOG: info.width = ' + info.size.width);
                       console.info('MS_LITE_LOG: info.height = ' + info.size.height);
                       // 4. Scale and crop the image to the model input size, and obtain the image buffer readBuffer.
                       pixelMap.scale(256.0 / info.size.width, 256.0 / info.size.height).then(() => {
                         pixelMap.crop(
                           { x: 16, y: 16, size: { height: this.modelInputHeight, width: this.modelInputWidth } }
                         ).then(async () => {
                           let info = await pixelMap.getImageInfo();
                           console.info('MS_LITE_LOG: crop info.width = ' + info.size.width);
                           console.info('MS_LITE_LOG: crop info.height = ' + info.size.height);
                           // Set the size of readBuffer.
                           let readBuffer = new ArrayBuffer(this.modelInputHeight * this.modelInputWidth * 4);
                           await pixelMap.readPixelsToBuffer(readBuffer);
                           console.info('MS_LITE_LOG: Succeeded in reading image pixel data, buffer: ' +
                           readBuffer.byteLength);
                           // Convert readBuffer to the float32 format, and standardize the image.
                           const imageArr = new Uint8Array(
                             readBuffer.slice(0, this.modelInputHeight * this.modelInputWidth * 4));
                           console.info('MS_LITE_LOG: imageArr length: ' + imageArr.length);
                           let means = [0.485, 0.456, 0.406];
                           let stds = [0.229, 0.224, 0.225];
                           let float32View = new Float32Array(this.modelInputHeight * this.modelInputWidth * 3);
                           let index = 0;
                           for (let i = 0; i < imageArr.length; i++) {
                             if ((i + 1) % 4 == 0) {
                               float32View[index] = (imageArr[i - 3] / 255.0 - means[0]) / stds[0]; // B
                               float32View[index + 1] = (imageArr[i - 2] / 255.0 - means[1]) / stds[1]; // G
                               float32View[index + 2] = (imageArr[i - 1] / 255.0 - means[2]) / stds[2]; // R
                               index += 3;
                             }
                           }
                           console.info('MS_LITE_LOG: float32View length: ' + float32View.length);
                           let printStr = 'float32View data:';
                           for (let i = 0; i < 20; i++) {
                             printStr += ' ' + float32View[i];
                           }
                           console.info('MS_LITE_LOG: ' + printStr);
                         })
                       })
                     })
                   })
                 } catch (err) {
                   console.error('MS_LITE_ERR: open file failed. ' + err);
                 }
               })
             })
           })
         }
         .width('100%')
       }
       .height('100%')
     }
   }
   ```

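   The per-pixel loop above can be factored into a small helper. The following is a minimal sketch under the same assumptions as the code above (a 4-byte-per-pixel buffer, normalized into a 3-channel Float32Array in the same channel order); the function name is illustrative:

   ```ts
   // Hypothetical helper: normalize a 4-channel byte buffer into a
   // 3-channel Float32Array, dropping the fourth (alpha) channel.
   function normalizePixels(pixels: Uint8Array, means: number[], stds: number[]): Float32Array {
     const pixelCount = pixels.length / 4;
     const out = new Float32Array(pixelCount * 3);
     for (let p = 0; p < pixelCount; p++) {
       for (let c = 0; c < 3; c++) {
         // Scale each channel to [0, 1], then standardize with mean/std.
         out[p * 3 + c] = (pixels[p * 4 + c] / 255.0 - means[c]) / stds[c];
       }
     }
     return out;
   }
   ```
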
#### Writing Inference Code

1. If the capability set defined for the project does not include MindSpore Lite, create a **syscap.json** file in the **entry/src/main** directory of the DevEco Studio project. The file content is as follows:

   ```json
   {
     "devices": {
       "general": [
         // The value must be the same as the value of deviceTypes in the module.json5 file.
         "default"
       ]
     },
     "development": {
       "addedSysCaps": [
         "SystemCapability.AI.MindSporeLite"
       ]
     }
   }
   ```

2. Call [@ohos.ai.mindSporeLite](../../reference/apis-mindspore-lite-kit/js-apis-mindSporeLite.md) to implement inference on the device. The operation process is as follows:

   1. Create a context, and set parameters such as the number of runtime threads and the device type. The sample model does not support NNRt inference.
   2. Load the model. In this example, the model is loaded from memory.
   3. Load the data. Before executing the model, obtain the model input and then fill the input tensors with data.
   4. Perform model inference through the **predict** API.

   ```ts
   // model.ets
   import { mindSporeLite } from '@kit.MindSporeLiteKit';

   export default async function modelPredict(
     modelBuffer: ArrayBuffer, inputsBuffer: ArrayBuffer[]): Promise<mindSporeLite.MSTensor[]> {

     // 1. Create a context, and set parameters such as the number of runtime threads and the device type. The context.target = ["nnrt"] option is not supported.
     let context: mindSporeLite.Context = {};
     context.target = ['cpu'];
     context.cpu = {};
     context.cpu.threadNum = 2;
     context.cpu.threadAffinityMode = 1;
     context.cpu.precisionMode = 'enforce_fp32';

     // 2. Load the model from memory.
     let msLiteModel: mindSporeLite.Model = await mindSporeLite.loadModelFromBuffer(modelBuffer, context);

     // 3. Set the input data.
     let modelInputs: mindSporeLite.MSTensor[] = msLiteModel.getInputs();
     for (let i = 0; i < inputsBuffer.length; i++) {
       let inputBuffer = inputsBuffer[i];
       if (inputBuffer != null) {
         modelInputs[i].setData(inputBuffer as ArrayBuffer);
       }
     }

     // 4. Perform inference.
     console.info('=========MS_LITE_LOG: MS_LITE predict start=====');
     let modelOutputs: mindSporeLite.MSTensor[] = await msLiteModel.predict(modelInputs);
     return modelOutputs;
   }
   ```

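   Before consuming the outputs, it can help to log the output tensor metadata. A short sketch at the call site (property names per the MSTensor API; the surrounding variables are assumed):

   ```ts
   // Inspect each output tensor returned by modelPredict.
   let outputs: mindSporeLite.MSTensor[] = await modelPredict(modelBuffer, inputs);
   for (let tensor of outputs) {
     console.info('MS_LITE_LOG: output name=' + tensor.name +
       ', shape=' + tensor.shape + ', elements=' + tensor.elementNum);
   }
   ```
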
#### Executing Inference

Load the model file, call the inference function on the preprocessed image, and then process the inference result.

```ts
// Index.ets
import modelPredict from './model';

@Entry
@Component
struct Index {
  @State modelName: string = 'mobilenetv2.ms';
  @State modelInputHeight: number = 224;
  @State modelInputWidth: number = 224;
  @State max: number = 0;
  @State maxIndex: number = 0;
  @State maxArray: Array<number> = [];
  @State maxIndexArray: Array<number> = [];

  build() {
    Row() {
      Column() {
        Button() {
          Text('photo')
            .fontSize(30)
            .fontWeight(FontWeight.Bold)
        }
        .type(ButtonType.Capsule)
        .margin({
          top: 20
        })
        .backgroundColor('#0D9FFB')
        .width('40%')
        .height('5%')
        .onClick(() => {
          let resMgr = this.getUIContext()?.getHostContext()?.getApplicationContext().resourceManager;
          resMgr?.getRawFileContent(this.modelName).then(modelBuffer => {
            // Image input and preprocessing
            // After preprocessing, the buffer data of the input image is stored in float32View. For details, see Image Input and Preprocessing.
            let inputs: ArrayBuffer[] = [float32View.buffer];
            // predict
            modelPredict(modelBuffer.buffer.slice(0), inputs).then(outputs => {
              console.info('=========MS_LITE_LOG: MS_LITE predict success=====');
              // Print the result.
              for (let i = 0; i < outputs.length; i++) {
                let out = new Float32Array(outputs[i].getData());
                let printStr = outputs[i].name + ':';
                for (let j = 0; j < out.length; j++) {
                  printStr += out[j].toString() + ',';
                }
                console.info('MS_LITE_LOG: ' + printStr);
                // Obtain the top 5 categories by score.
                this.max = 0;
                this.maxIndex = 0;
                this.maxArray = [];
                this.maxIndexArray = [];
                let remaining: number[] = Array.from(out);
                for (let n = 0; n < 5; n++) {
                  this.max = Number.NEGATIVE_INFINITY;
                  this.maxIndex = 0;
                  for (let m = 0; m < remaining.length; m++) {
                    if (remaining[m] > this.max) {
                      this.max = remaining[m];
                      this.maxIndex = m;
                    }
                  }
                  this.maxArray.push(Math.round(this.max * 10000));
                  this.maxIndexArray.push(this.maxIndex);
                  // Exclude the found maximum from the next pass without shifting indices.
                  remaining[this.maxIndex] = Number.NEGATIVE_INFINITY;
                }
                console.info('MS_LITE_LOG: max:' + this.maxArray);
                console.info('MS_LITE_LOG: maxIndex:' + this.maxIndexArray);
              }
              console.info('=========MS_LITE_LOG END=========');
            })
          })
        })
      }
      .width('100%')
    }
    .height('100%')
  }
}
```

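To display human-readable results, the class indices in **maxIndexArray** need to be mapped to label strings. A minimal sketch, assuming a hypothetical **labels.txt** file (one class name per line, ordered to match the model's output indices) placed in **entry/src/main/resources/rawfile**; note that **decodeToString** requires a recent API version:

```ts
// Load a hypothetical labels.txt from rawfile and split it into lines.
import { util } from '@kit.ArkTS';
import { resourceManager } from '@kit.LocalizationKit';

async function loadLabels(resMgr: resourceManager.ResourceManager): Promise<string[]> {
  let data: Uint8Array = await resMgr.getRawFileContent('labels.txt');
  let text: string = util.TextDecoder.create('utf-8').decodeToString(data);
  return text.split('\n').map((line: string) => line.trim()).filter((line: string) => line.length > 0);
}
```

With the labels loaded, `this.maxIndexArray.map(i => labels[i])` would yield the category names to display.
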
### Debugging and Verification

1. In DevEco Studio, connect the device, and click **Run entry** to build and install your own HAP:

   ```shell
   Launching com.samples.mindsporelitearktsdemo
   $ hdc shell aa force-stop com.samples.mindsporelitearktsdemo
   $ hdc shell mkdir data/local/tmp/xxx
   $ hdc file send C:\Users\xxx\MindSporeLiteArkTSDemo\entry\build\default\outputs\default\entry-default-signed.hap "data/local/tmp/xxx"
   $ hdc shell bm install -p data/local/tmp/xxx
   $ hdc shell rm -rf data/local/tmp/xxx
   $ hdc shell aa start -a EntryAbility -b com.samples.mindsporelitearktsdemo
   ```

2. Touch the **photo** button on the device screen, select an image, and touch **OK**. The classification result of the selected image is displayed on the device screen. In the log output, filter by the keyword **MS_LITE**. Information similar to the following is displayed:

   ```verilog
   08-06 03:24:33.743   22547-22547  A03d00/JSAPP                   com.sampl...liteark+  I     MS_LITE_LOG: PhotoViewPicker.select successfully, photoSelectResult uri: {"photoUris":["file://media/Photo/13/IMG_1501955351_012/plant.jpg"]}
   08-06 03:24:33.795   22547-22547  A03d00/JSAPP                   com.sampl...liteark+  I     MS_LITE_LOG: readSync data to file succeed and inputBuffer size is:32824
   08-06 03:24:34.147   22547-22547  A03d00/JSAPP                   com.sampl...liteark+  I     MS_LITE_LOG: crop info.width = 224
   08-06 03:24:34.147   22547-22547  A03d00/JSAPP                   com.sampl...liteark+  I     MS_LITE_LOG: crop info.height = 224
   08-06 03:24:34.160   22547-22547  A03d00/JSAPP                   com.sampl...liteark+  I     MS_LITE_LOG: Succeeded in reading image pixel data, buffer: 200704
   08-06 03:24:34.970   22547-22547  A03d00/JSAPP                   com.sampl...liteark+  I     =========MS_LITE_LOG: MS_LITE predict start=====
   08-06 03:24:35.432   22547-22547  A03d00/JSAPP                   com.sampl...liteark+  I     =========MS_LITE_LOG: MS_LITE predict success=====
   08-06 03:24:35.447   22547-22547  A03d00/JSAPP                   com.sampl...liteark+  I     MS_LITE_LOG: Default/head-MobileNetV2Head/Sigmoid-op466:0.0000034338463592575863,0.000014028532859811094,9.119685273617506e-7,0.000049100715841632336,9.502661555416125e-7,3.945370394831116e-7,0.04346757382154465,0.00003971960904891603...
   08-06 03:24:35.499   22547-22547  A03d00/JSAPP                   com.sampl...liteark+  I     MS_LITE_LOG: max:9497,7756,1970,435,46
   08-06 03:24:35.499   22547-22547  A03d00/JSAPP                   com.sampl...liteark+  I     MS_LITE_LOG: maxIndex:323,46,13,6,349
   08-06 03:24:35.499   22547-22547  A03d00/JSAPP                   com.sampl...liteark+  I     =========MS_LITE_LOG END=========
   ```

### Effects

Touch the **photo** button on the device screen, select an image, and touch **OK**. The top 4 categories of the image are displayed below the image.

![step1](figures/step1.png)         ![step2](figures/step2.png)

![step3](figures/step3.png)         ![step4](figures/step4.png)


<!--RP1--><!--RP1End-->