# @ohos.ai.mindSporeLite (On-device AI Framework)

MindSpore Lite is a lightweight, high-performance on-device AI engine that provides standard model inference and training APIs and built-in universal high-performance operator libraries. It supports Neural Network Runtime Kit for higher inference efficiency, empowering intelligent applications in all scenarios.

This topic describes the model inference and training capabilities supported by the MindSpore Lite AI engine.

> **NOTE**
>
> - The initial APIs of this module are supported since API version 10. Newly added APIs will be marked with a superscript to indicate their earliest API version. Unless otherwise stated, the MindSpore model is used in the sample code.
>
> - The APIs of this module can be used only in the stage model.

## Modules to Import

```ts
import { mindSporeLite } from '@kit.MindSporeLiteKit';
```

## mindSporeLite.loadModelFromFile

loadModelFromFile(model: string, callback: Callback&lt;Model&gt;): void

Loads the input model from the full path for model inference. This API uses an asynchronous callback to return the result.

**System capability**: SystemCapability.AI.MindSporeLite

**Parameters**

| Name     | Type                      | Mandatory| Description              |
| -------- | ------------------------- | ---- | ------------------------ |
| model    | string                    | Yes  | Complete path of the input model.|
| callback | Callback<[Model](#model)> | Yes  | Callback used to return the result, which is a **Model** object.|

**Example**

```ts
let modelFile: string = '/path/to/xxx.ms';
mindSporeLite.loadModelFromFile(modelFile, (mindSporeLiteModel: mindSporeLite.Model) => {
  let modelInputs: mindSporeLite.MSTensor[] = mindSporeLiteModel.getInputs();
  console.info(modelInputs[0].name);
})
```

## mindSporeLite.loadModelFromFile

loadModelFromFile(model: string, context: Context, callback: Callback&lt;Model&gt;): void

Loads the input model from the full path for model inference. This API uses an asynchronous callback to return the result.

**System capability**: SystemCapability.AI.MindSporeLite

**Parameters**

| Name     | Type                      | Mandatory| Description            |
| -------- | ------------------------- | ---- | ---------------------- |
| model    | string                    | Yes  | Complete path of the input model.|
| context  | [Context](#context)       | Yes  | Configuration information of the running environment.|
| callback | Callback<[Model](#model)> | Yes  | Callback used to return the result, which is a **Model** object.|

**Example**

```ts
let context: mindSporeLite.Context = {};
context.target = ['cpu'];
let modelFile: string = '/path/to/xxx.ms';
mindSporeLite.loadModelFromFile(modelFile, context, (mindSporeLiteModel: mindSporeLite.Model) => {
  let modelInputs: mindSporeLite.MSTensor[] = mindSporeLiteModel.getInputs();
  console.info(modelInputs[0].name);
})
```

## mindSporeLite.loadModelFromFile

loadModelFromFile(model: string, context?: Context): Promise&lt;Model&gt;

Loads the input model from the full path for model inference. This API uses a promise to return the result.

**System capability**: SystemCapability.AI.MindSporeLite

**Parameters**

| Name    | Type                | Mandatory| Description                                   |
| ------- | ------------------- | ---- | --------------------------------------------- |
| model   | string              | Yes  | Complete path of the input model.             |
| context | [Context](#context) | No  | Configuration information of the running environment. By default, **CpuDevice** is used for initialization.|

**Return value**

| Type                     | Description                  |
| ------------------------- | ---------------------------- |
| Promise<[Model](#model)> | Promise used to return the result, which is a **Model** object.|

**Example**

```ts
let modelFile = '/path/to/xxx.ms';
mindSporeLite.loadModelFromFile(modelFile).then((mindSporeLiteModel: mindSporeLite.Model) => {
  let modelInputs: mindSporeLite.MSTensor[] = mindSporeLiteModel.getInputs();
  console.info(modelInputs[0].name);
})
```

## mindSporeLite.loadModelFromBuffer

loadModelFromBuffer(model: ArrayBuffer, callback: Callback&lt;Model&gt;): void

Loads the input model from the memory for inference. This API uses an asynchronous callback to return the result.

**System capability**: SystemCapability.AI.MindSporeLite

**Parameters**

| Name     | Type                      | Mandatory| Description              |
| -------- | ------------------------- | ---- | ------------------------ |
| model    | ArrayBuffer               | Yes  | Memory that contains the input model.|
| callback | Callback<[Model](#model)> | Yes  | Callback used to return the result, which is a **Model** object.|

**Example**

```ts
import { mindSporeLite } from '@kit.MindSporeLiteKit';
import { common } from '@kit.AbilityKit';
import { UIContext } from '@kit.ArkUI';

let modelFile = 'xxx.ms';
let globalContext = new UIContext().getHostContext() as common.UIAbilityContext;
globalContext.getApplicationContext().resourceManager.getRawFileContent(modelFile).then((buffer: Uint8Array) => {
  let modelBuffer = buffer.buffer;
  mindSporeLite.loadModelFromBuffer(modelBuffer, (mindSporeLiteModel: mindSporeLite.Model) => {
    let modelInputs: mindSporeLite.MSTensor[] = mindSporeLiteModel.getInputs();
    console.info('MS_LITE_LOG: ' + modelInputs[0].name);
  })
})
```

## mindSporeLite.loadModelFromBuffer

loadModelFromBuffer(model: ArrayBuffer, context: Context, callback: Callback&lt;Model&gt;): void

Loads the input model from the memory for inference. This API uses an asynchronous callback to return the result.

**System capability**: SystemCapability.AI.MindSporeLite

**Parameters**

| Name     | Type                      | Mandatory| Description            |
| -------- | ------------------------- | ---- | ---------------------- |
| model    | ArrayBuffer               | Yes  | Memory that contains the input model.|
| context  | [Context](#context)       | Yes  | Configuration information of the running environment.|
| callback | Callback<[Model](#model)> | Yes  | Callback used to return the result, which is a **Model** object.|

**Example**

```ts
import { mindSporeLite } from '@kit.MindSporeLiteKit';
import { common } from '@kit.AbilityKit';
import { UIContext } from '@kit.ArkUI';

let modelFile = 'xxx.ms';
let globalContext = new UIContext().getHostContext() as common.UIAbilityContext;
globalContext.getApplicationContext().resourceManager.getRawFileContent(modelFile).then((buffer: Uint8Array) => {
  let modelBuffer = buffer.buffer;
  let context: mindSporeLite.Context = {};
  context.target = ['cpu'];
  mindSporeLite.loadModelFromBuffer(modelBuffer, context, (mindSporeLiteModel: mindSporeLite.Model) => {
    let modelInputs: mindSporeLite.MSTensor[] = mindSporeLiteModel.getInputs();
    console.info('MS_LITE_LOG: ' + modelInputs[0].name);
  })
})
```

## mindSporeLite.loadModelFromBuffer

loadModelFromBuffer(model: ArrayBuffer, context?: Context): Promise&lt;Model&gt;

Loads the input model from the memory for inference. This API uses a promise to return the result.

**System capability**: SystemCapability.AI.MindSporeLite

**Parameters**

| Name    | Type                | Mandatory| Description                                   |
| ------- | ------------------- | ---- | --------------------------------------------- |
| model   | ArrayBuffer         | Yes  | Memory that contains the input model.         |
| context | [Context](#context) | No  | Configuration information of the running environment. By default, **CpuDevice** is used for initialization.|

**Return value**

| Type                     | Description                  |
| ------------------------- | ---------------------------- |
| Promise<[Model](#model)> | Promise used to return the result, which is a **Model** object.|

**Example**

```ts
import { mindSporeLite } from '@kit.MindSporeLiteKit';
import { common } from '@kit.AbilityKit';
import { UIContext } from '@kit.ArkUI';

let modelFile = 'xxx.ms';
let globalContext = new UIContext().getHostContext() as common.UIAbilityContext;
globalContext.getApplicationContext().resourceManager.getRawFileContent(modelFile).then((buffer: Uint8Array) => {
  let modelBuffer = buffer.buffer;
  mindSporeLite.loadModelFromBuffer(modelBuffer).then((mindSporeLiteModel: mindSporeLite.Model) => {
    let modelInputs: mindSporeLite.MSTensor[] = mindSporeLiteModel.getInputs();
    console.info('MS_LITE_LOG: ' + modelInputs[0].name);
  })
})
```

## mindSporeLite.loadModelFromFd

loadModelFromFd(model: number, callback: Callback&lt;Model&gt;): void

Loads the input model based on the specified file descriptor for inference. This API uses an asynchronous callback to return the result.

**System capability**: SystemCapability.AI.MindSporeLite

**Parameters**

| Name     | Type                      | Mandatory| Description            |
| -------- | ------------------------- | ---- | ---------------------- |
| model    | number                    | Yes  | File descriptor of the input model.|
| callback | Callback<[Model](#model)> | Yes  | Callback used to return the result, which is a **Model** object.|

**Example**

```ts
import { fileIo } from '@kit.CoreFileKit';

let modelFile = '/path/to/xxx.ms';
let file = fileIo.openSync(modelFile, fileIo.OpenMode.READ_ONLY);
mindSporeLite.loadModelFromFd(file.fd, (mindSporeLiteModel: mindSporeLite.Model) => {
  let modelInputs: mindSporeLite.MSTensor[] = mindSporeLiteModel.getInputs();
  console.info(modelInputs[0].name);
})
```

## mindSporeLite.loadModelFromFd

loadModelFromFd(model: number, context: Context, callback: Callback&lt;Model&gt;): void

Loads the input model based on the specified file descriptor for inference. This API uses an asynchronous callback to return the result.

**System capability**: SystemCapability.AI.MindSporeLite

**Parameters**

| Name     | Type                      | Mandatory| Description            |
| -------- | ------------------------- | ---- | ---------------------- |
| model    | number                    | Yes  | File descriptor of the input model.|
| context  | [Context](#context)       | Yes  | Configuration information of the running environment.|
| callback | Callback<[Model](#model)> | Yes  | Callback used to return the result, which is a **Model** object.|

**Example**

```ts
import { fileIo } from '@kit.CoreFileKit';

let modelFile = '/path/to/xxx.ms';
let context: mindSporeLite.Context = {};
context.target = ['cpu'];
let file = fileIo.openSync(modelFile, fileIo.OpenMode.READ_ONLY);
mindSporeLite.loadModelFromFd(file.fd, context, (mindSporeLiteModel: mindSporeLite.Model) => {
  let modelInputs: mindSporeLite.MSTensor[] = mindSporeLiteModel.getInputs();
  console.info(modelInputs[0].name);
})
```

## mindSporeLite.loadModelFromFd

loadModelFromFd(model: number, context?: Context): Promise&lt;Model&gt;

Loads the input model based on the specified file descriptor for inference. This API uses a promise to return the result.

**System capability**: SystemCapability.AI.MindSporeLite

**Parameters**

| Name    | Type                | Mandatory| Description                                   |
| ------- | ------------------- | ---- | --------------------------------------------- |
| model   | number              | Yes  | File descriptor of the input model.           |
| context | [Context](#context) | No  | Configuration information of the running environment. By default, **CpuDevice** is used for initialization.|

**Return value**

| Type                     | Description                  |
| ------------------------- | ---------------------------- |
| Promise<[Model](#model)> | Promise used to return the result, which is a **Model** object.|

**Example**

```ts
import { fileIo } from '@kit.CoreFileKit';

let modelFile = '/path/to/xxx.ms';
let file = fileIo.openSync(modelFile, fileIo.OpenMode.READ_ONLY);
mindSporeLite.loadModelFromFd(file.fd).then((mindSporeLiteModel: mindSporeLite.Model) => {
  let modelInputs: mindSporeLite.MSTensor[] = mindSporeLiteModel.getInputs();
  console.info(modelInputs[0].name);
})
```

## mindSporeLite.loadTrainModelFromFile<sup>12+</sup>

loadTrainModelFromFile(model: string, trainCfg?: TrainCfg, context?: Context): Promise&lt;Model&gt;

Loads the training model file based on the specified path. This API uses a promise to return the result.

**System capability**: SystemCapability.AI.MindSporeLite

**Parameters**

| Name     | Type                    | Mandatory| Description                                    |
| -------- | ----------------------- | ---- | ---------------------------------------------- |
| model    | string                  | Yes  | Complete path of the input model.              |
| trainCfg | [TrainCfg](#traincfg12) | No  | Model training configuration. By default, each attribute of **TrainCfg** takes its default value.|
| context  | [Context](#context)     | No  | Configuration information of the running environment. By default, **CpuDevice** is used for initialization.|

**Return value**

| Type                     | Description          |
| ------------------------ | -------------------- |
| Promise<[Model](#model)> | Promise used to return the result, which is a **Model** object.|

**Example**

```ts
let modelFile = '/path/to/xxx.ms';
mindSporeLite.loadTrainModelFromFile(modelFile).then((mindSporeLiteModel: mindSporeLite.Model) => {
  let modelInputs: mindSporeLite.MSTensor[] = mindSporeLiteModel.getInputs();
  console.info(modelInputs[0].name);
})
```

## mindSporeLite.loadTrainModelFromBuffer<sup>12+</sup>

loadTrainModelFromBuffer(model: ArrayBuffer, trainCfg?: TrainCfg, context?: Context): Promise&lt;Model&gt;

Loads a training model from the memory buffer. This API uses a promise to return the result.

**System capability**: SystemCapability.AI.MindSporeLite

**Parameters**

| Name     | Type                    | Mandatory| Description                                   |
| -------- | ----------------------- | ---- | --------------------------------------------- |
| model    | ArrayBuffer             | Yes  | Memory that contains the training model.      |
| trainCfg | [TrainCfg](#traincfg12) | No  | Model training configuration. By default, each attribute of **TrainCfg** takes its default value.|
| context  | [Context](#context)     | No  | Configuration information of the running environment. By default, **CpuDevice** is used for initialization.|

**Return value**

| Type                     | Description          |
| ------------------------ | -------------------- |
| Promise<[Model](#model)> | Promise used to return the result, which is a **Model** object.|

**Example**

```ts
import { mindSporeLite } from '@kit.MindSporeLiteKit';
import { common } from '@kit.AbilityKit';
import { UIContext } from '@kit.ArkUI';

let modelFile = 'xxx.ms';
let globalContext = new UIContext().getHostContext() as common.UIAbilityContext;
globalContext.getApplicationContext().resourceManager.getRawFileContent(modelFile).then((buffer: Uint8Array) => {
  let modelBuffer = buffer.buffer;
  mindSporeLite.loadTrainModelFromBuffer(modelBuffer).then((mindSporeLiteModel: mindSporeLite.Model) => {
    console.info("MSLITE trainMode: ", mindSporeLiteModel.trainMode);
  })
})
```

## mindSporeLite.loadTrainModelFromFd<sup>12+</sup>

loadTrainModelFromFd(model: number, trainCfg?: TrainCfg, context?: Context): Promise&lt;Model&gt;

Loads the training model file from the file descriptor. This API uses a promise to return the result.

**System capability**: SystemCapability.AI.MindSporeLite

**Parameters**

| Name     | Type                    | Mandatory| Description                                   |
| -------- | ----------------------- | ---- | --------------------------------------------- |
| model    | number                  | Yes  | File descriptor of the training model.        |
| trainCfg | [TrainCfg](#traincfg12) | No  | Model training configuration. By default, each attribute of **TrainCfg** takes its default value.|
| context  | [Context](#context)     | No  | Configuration information of the running environment. By default, **CpuDevice** is used for initialization.|

**Return value**

| Type                     | Description                  |
| ------------------------ | ---------------------------- |
| Promise<[Model](#model)> | Promise used to return the result, which is a **Model** object.|

**Example**

```ts
import { fileIo } from '@kit.CoreFileKit';

let modelFile = '/path/to/xxx.ms';
let file = fileIo.openSync(modelFile, fileIo.OpenMode.READ_ONLY);
mindSporeLite.loadTrainModelFromFd(file.fd).then((mindSporeLiteModel: mindSporeLite.Model) => {
  console.info("MSLITE trainMode: ", mindSporeLiteModel.trainMode);
});
```

## mindSporeLite.getAllNNRTDeviceDescriptions<sup>12+</sup>

getAllNNRTDeviceDescriptions() : NNRTDeviceDescription[]

Obtains all device descriptions in NNRt.

**System capability**: SystemCapability.AI.MindSporeLite

**Return value**

| Type                                                | Description            |
| --------------------------------------------------- | ---------------------- |
| [NNRTDeviceDescription](#nnrtdevicedescription12)[] | NNRt device description array.|

**Example**

```ts
let allDevices = mindSporeLite.getAllNNRTDeviceDescriptions();
if (allDevices == null) {
  console.error('MS_LITE_LOG: getAllNNRTDeviceDescriptions is NULL.');
}
```

## Context

Defines the configuration information of the running environment.

### Attributes

**System capability**: SystemCapability.AI.MindSporeLite

| Name   | Type                      | Read Only| Optional| Description                                                  |
| ------ | ------------------------- | ---- | ---- | ------------------------------------------------------------ |
| target | string[]                  | No  | Yes  | Target backend. The value can be **cpu** or **nnrt**. The default value is **cpu**.|
| cpu    | [CpuDevice](#cpudevice)   | No  | Yes  | CPU backend device option. Set this attribute only when **target** is set to **cpu**. By default, each attribute of **CpuDevice** takes its default value.|
| nnrt   | [NNRTDevice](#nnrtdevice) | No  | Yes  | NNRt backend device option. Set this attribute only when **target** is set to **nnrt**. By default, each attribute of **NNRTDevice** takes its default value.|

**Example**

```ts
let context: mindSporeLite.Context = {};
context.target = ['cpu', 'nnrt'];
```

## CpuDevice

Defines the CPU backend device option.

### Attributes

**System capability**: SystemCapability.AI.MindSporeLite

| Name                   | Type                                      | Read Only| Optional| Description                                                  |
| ---------------------- | ----------------------------------------- | ---- | ---- | ------------------------------------------------------------ |
| threadNum              | number                                    | No  | Yes  | Number of runtime threads. The default value is **2**.|
| threadAffinityMode     | [ThreadAffinityMode](#threadaffinitymode) | No  | Yes  | Affinity mode for binding runtime threads to CPU cores. The default value is **mindSporeLite.ThreadAffinityMode.NO_AFFINITIES**.|
| threadAffinityCoreList | number[]                                  | No  | Yes  | List of CPU cores bound to runtime threads. Set this attribute only when **threadAffinityMode** is set. If **threadAffinityMode** is set to **mindSporeLite.ThreadAffinityMode.NO_AFFINITIES**, this list is empty. Each number in the list is the sequence number of a CPU core. The default value is **[]**.|
| precisionMode          | string                                    | No  | Yes  | Whether to enable the Float16 inference mode. The value **preferred_fp16** enables half-precision inference; the default value **enforce_fp32** disables it. Other settings are not supported.|

**Float16 inference mode**: a mode that uses half-precision inference. Float16 uses 16 bits to represent a number and is therefore also called half-precision.

**Example**

```ts
let context: mindSporeLite.Context = {};
context.cpu = {};
context.target = ['cpu'];
context.cpu.threadNum = 2;
context.cpu.threadAffinityMode = 0;
context.cpu.precisionMode = 'preferred_fp16';
context.cpu.threadAffinityCoreList = [0, 1, 2];
```

## ThreadAffinityMode

Specifies the affinity mode for binding runtime threads to CPU cores.

**System capability**: SystemCapability.AI.MindSporeLite

| Name               | Value| Description        |
| ------------------ | ---- | ------------ |
| NO_AFFINITIES      | 0    | No affinities.    |
| BIG_CORES_FIRST    | 1    | Big cores first.|
| LITTLE_CORES_FIRST | 2    | Little cores first.|

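The affinity mode is set through the **cpu** option of [Context](#context). The following is a minimal sketch; the chosen mode is illustrative, and the best setting depends on the device's core layout:

```ts
// Prefer big cores for runtime threads (illustrative choice).
let context: mindSporeLite.Context = {};
context.target = ['cpu'];
context.cpu = {};
context.cpu.threadAffinityMode = mindSporeLite.ThreadAffinityMode.BIG_CORES_FIRST;
```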
## NNRTDevice

Represents an NNRt device. Neural Network Runtime (NNRt) is a bridge that connects the upper-layer AI inference framework to the bottom-layer acceleration chip, implementing cross-chip inference and computing of AI models. An NNRt backend can be configured for MindSpore Lite.

### Attributes

**System capability**: SystemCapability.AI.MindSporeLite

| Name                          | Type                                  | Read Only| Optional| Description              |
| ----------------------------- | ------------------------------------- | ---- | ---- | ------------------------ |
| deviceID<sup>12+</sup>        | bigint                                | No  | Yes  | NNRt device ID. The default value is **0**.|
| performanceMode<sup>12+</sup> | [PerformanceMode](#performancemode12) | No  | Yes  | NNRt device performance mode. The default value is **PERFORMANCE_NONE**.|
| priority<sup>12+</sup>        | [Priority](#priority12)               | No  | Yes  | NNRt inference task priority. The default value is **PRIORITY_MEDIUM**.|
| extensions<sup>12+</sup>      | [Extension](#extension12)[]           | No  | Yes  | Extended NNRt device configuration. This parameter is left empty by default.|

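As a sketch of how these attributes fit together, the following example selects the first device returned by [getAllNNRTDeviceDescriptions()](#mindsporelitegetallnnrtdevicedescriptions12) and tunes its performance mode and priority; the chosen enum values are illustrative, not recommendations:

```ts
// Configure an NNRt backend using the first discovered device (illustrative values).
let allDevices = mindSporeLite.getAllNNRTDeviceDescriptions();
let context: mindSporeLite.Context = {};
context.target = ['nnrt'];
context.nnrt = {};
if (allDevices.length > 0) {
  context.nnrt.deviceID = allDevices[0].deviceID();
  context.nnrt.performanceMode = mindSporeLite.PerformanceMode.PERFORMANCE_MEDIUM;
  context.nnrt.priority = mindSporeLite.Priority.PRIORITY_MEDIUM;
}
```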
## PerformanceMode<sup>12+</sup>

Enumerates NNRt device performance modes.

**System capability**: SystemCapability.AI.MindSporeLite

| Name                | Value| Description         |
| ------------------- | ---- | ------------------- |
| PERFORMANCE_NONE    | 0    | No special settings.|
| PERFORMANCE_LOW     | 1    | Low power consumption.|
| PERFORMANCE_MEDIUM  | 2    | Balanced power consumption and performance.|
| PERFORMANCE_HIGH    | 3    | High performance.|
| PERFORMANCE_EXTREME | 4    | Ultimate performance.|

## Priority<sup>12+</sup>

Enumerates NNRt inference task priorities.

**System capability**: SystemCapability.AI.MindSporeLite

| Name            | Value| Description    |
| --------------- | ---- | -------------- |
| PRIORITY_NONE   | 0    | No priority preference.|
| PRIORITY_LOW    | 1    | Low priority.|
| PRIORITY_MEDIUM | 2    | Medium priority.|
| PRIORITY_HIGH   | 3    | High priority.|

## Extension<sup>12+</sup>

Defines the extended NNRt device configuration.

### Attributes

**System capability**: SystemCapability.AI.MindSporeLite

| Name                | Type        | Read Only| Optional| Description      |
| ------------------- | ----------- | ---- | ---- | ---------------- |
| name<sup>12+</sup>  | string      | No  | No  | Configuration name.|
| value<sup>12+</sup> | ArrayBuffer | No  | No  | Memory that holds the extended configuration.|

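For illustration only, the following sketch attaches an extended option to an NNRt device. The configuration name and value bytes here are hypothetical; the names and encodings that an NNRt device actually accepts are defined by the device vendor:

```ts
// Hypothetical vendor-defined option; real names and value formats are vendor-specific.
let ext: mindSporeLite.Extension = {
  name: 'VendorOption',
  value: new Uint8Array([0x01]).buffer
};
let context: mindSporeLite.Context = {};
context.target = ['nnrt'];
context.nnrt = {};
context.nnrt.extensions = [ext];
```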
## NNRTDeviceDescription<sup>12+</sup>

Defines NNRt device information, including the device ID and device name.

**System capability**: SystemCapability.AI.MindSporeLite

### deviceID

deviceID() : bigint

Obtains the NNRt device ID.

**System capability**: SystemCapability.AI.MindSporeLite

**Return value**

| Type   | Description  |
| ------ | ------------ |
| bigint | NNRt device ID.|

**Example**

```ts
let allDevices = mindSporeLite.getAllNNRTDeviceDescriptions();
if (allDevices == null) {
  console.error('getAllNNRTDeviceDescriptions is NULL.');
}
let context: mindSporeLite.Context = {};
context.target = ["nnrt"];
context.nnrt = {};
for (let i: number = 0; i < allDevices.length; i++) {
  console.info(allDevices[i].deviceID().toString());
}
```

### deviceType

deviceType() : NNRTDeviceType

Obtains the NNRt device type.

**System capability**: SystemCapability.AI.MindSporeLite

**Return value**

| Type                                | Description    |
| ----------------------------------- | -------------- |
| [NNRTDeviceType](#nnrtdevicetype12) | NNRt device type.|

**Example**

```ts
let allDevices = mindSporeLite.getAllNNRTDeviceDescriptions();
if (allDevices == null) {
  console.error('getAllNNRTDeviceDescriptions is NULL.');
}
let context: mindSporeLite.Context = {};
context.target = ["nnrt"];
context.nnrt = {};
for (let i: number = 0; i < allDevices.length; i++) {
  console.info(allDevices[i].deviceType().toString());
}
```

### deviceName

deviceName() : string

Obtains the NNRt device name.

**System capability**: SystemCapability.AI.MindSporeLite

**Return value**

| Type   | Description    |
| ------ | -------------- |
| string | NNRt device name.|

**Example**

```ts
let allDevices = mindSporeLite.getAllNNRTDeviceDescriptions();
if (allDevices == null) {
  console.error('getAllNNRTDeviceDescriptions is NULL.');
}
let context: mindSporeLite.Context = {};
context.target = ["nnrt"];
context.nnrt = {};
for (let i: number = 0; i < allDevices.length; i++) {
  console.info(allDevices[i].deviceName().toString());
}
```

## NNRTDeviceType<sup>12+</sup>

Enumerates NNRt device types.

**System capability**: SystemCapability.AI.MindSporeLite

| Name                   | Value| Description                         |
| ---------------------- | ---- | ----------------------------------- |
| NNRTDEVICE_OTHERS      | 0    | Others (any device type except the following three types).|
| NNRTDEVICE_CPU         | 1    | CPU.|
| NNRTDEVICE_GPU         | 2    | GPU.|
| NNRTDEVICE_ACCELERATOR | 3    | Specific acceleration device.|

## TrainCfg<sup>12+</sup>

Defines the configuration for on-device training.

### Attributes

**System capability**: SystemCapability.AI.MindSporeLite

| Name                            | Type                                      | Read Only| Optional| Description                                                  |
| ------------------------------- | ----------------------------------------- | ---- | ---- | ------------------------------------------------------------ |
| lossName<sup>12+</sup>          | string[]                                  | No  | Yes  | List of loss functions. The default value is ["loss\_fct", "\_loss\_fn", "SigmoidCrossEntropy"].|
| optimizationLevel<sup>12+</sup> | [OptimizationLevel](#optimizationlevel12) | No  | Yes  | Network optimization level for on-device training. The default value is **O0**.|

**Example**

```ts
let cfg: mindSporeLite.TrainCfg = {};
cfg.lossName = ["loss_fct", "_loss_fn", "SigmoidCrossEntropy"];
cfg.optimizationLevel = mindSporeLite.OptimizationLevel.O0;
```

## OptimizationLevel<sup>12+</sup>

Enumerates network optimization levels for on-device training.

**System capability**: SystemCapability.AI.MindSporeLite

| Name| Value| Description                                                |
| ---- | ---- | ---------------------------------------------------------- |
| O0   | 0    | No optimization level.|
| O2   | 2    | Converts the precision type of the network to float16 and keeps the precision type of the batch normalization layer and loss function as float32.|
| O3   | 3    | Converts the precision type of the network (including the batch normalization layer) to float16.|
| AUTO | 4    | Selects an optimization level based on the device.|

## QuantizationType<sup>12+</sup>

Enumerates quantization types.

**System capability**: SystemCapability.AI.MindSporeLite

| Name         | Value| Description  |
| ------------ | ---- | ---------- |
| NO_QUANT     | 0    | No quantization.|
| WEIGHT_QUANT | 1    | Weight quantization.|
| FULL_QUANT   | 2    | Full quantization.|

691## Model
692
693Represents a **Model** instance, with properties and APIs defined.
694
695In the following sample code, you first need to use [loadModelFromFile()](#mindsporeliteloadmodelfromfile), [loadModelFromBuffer()](#mindsporeliteloadmodelfrombuffer), or [loadModelFromFd()](#mindsporeliteloadmodelfromfd) to obtain a **Model** instance before calling related APIs.
696
697### Attributes
698
699**System capability**: SystemCapability.AI.MindSporeLite
700
701| Name                      | Type   | Read Only| Optional| Description                                                        |
702| -------------------------- | ------- | ---- | ---- | ------------------------------------------------------------ |
703| learningRate<sup>12+</sup> | number  | No  | Yes  | Learning rate of a training model. The default value is read from the loaded model.                |
704| trainMode<sup>12+</sup>    | boolean | No  | Yes  | Training mode. The value **true** indicates the training mode, and the value **false** indicates the non-training mode. The default value is **true** for a training model and **false** for an inference model.|

### getInputs

getInputs(): MSTensor[]

Obtains the model inputs for inference.

**System capability**: SystemCapability.AI.MindSporeLite

**Return value**

| Type                   | Description              |
| ----------------------- | ------------------ |
| [MSTensor](#mstensor)[] | List of **MSTensor** objects.|

**Example**

```ts
let modelFile = '/path/to/xxx.ms';
mindSporeLite.loadModelFromFile(modelFile).then((mindSporeLiteModel : mindSporeLite.Model) => {
  let modelInputs : mindSporeLite.MSTensor[] = mindSporeLiteModel.getInputs();
  console.info(modelInputs[0].name);
})
```

### predict

predict(inputs: MSTensor[], callback: Callback&lt;MSTensor[]&gt;): void

Performs model inference. This API uses an asynchronous callback to return the result. Ensure that the model object is not reclaimed while this API is being executed.

**System capability**: SystemCapability.AI.MindSporeLite

**Parameters**

| Name| Type                   | Mandatory| Description                      |
| ------ | ----------------------- | ---- | -------------------------- |
| inputs | [MSTensor](#mstensor)[] | Yes  | List of model inputs.  |
| callback | Callback<[MSTensor](#mstensor)[]> | Yes  | Callback used to return the result, which is a list of **MSTensor** objects.|

**Example**

```ts
import { mindSporeLite } from '@kit.MindSporeLiteKit';
import { common } from '@kit.AbilityKit';
import { UIContext } from '@kit.ArkUI';

let inputName = 'input_data.bin';
let globalContext = new UIContext().getHostContext() as common.UIAbilityContext;
globalContext.getApplicationContext().resourceManager.getRawFileContent(inputName).then(async (buffer : Uint8Array) => {
  let inputBuffer = buffer.buffer;
  let modelFile : string = '/path/to/xxx.ms';
  let mindSporeLiteModel : mindSporeLite.Model = await mindSporeLite.loadModelFromFile(modelFile);
  let modelInputs : mindSporeLite.MSTensor[] = mindSporeLiteModel.getInputs();

  modelInputs[0].setData(inputBuffer);
  mindSporeLiteModel.predict(modelInputs, (mindSporeLiteTensor : mindSporeLite.MSTensor[]) => {
    let output = new Float32Array(mindSporeLiteTensor[0].getData());
    for (let i = 0; i < output.length; i++) {
      console.info('MS_LITE_LOG: ' + output[i].toString());
    }
  })
})
```

### predict

predict(inputs: MSTensor[]): Promise&lt;MSTensor[]&gt;

Performs model inference. This API uses a promise to return the result. Ensure that the model object is not reclaimed while this API is being executed.

**System capability**: SystemCapability.AI.MindSporeLite

**Parameters**

| Name| Type                   | Mandatory| Description                          |
| ------ | ----------------------- | ---- | ------------------------------ |
| inputs | [MSTensor](#mstensor)[] | Yes  | List of model inputs.  |

**Return value**

| Type                   | Description                  |
| ----------------------- | ---------------------- |
| Promise<[MSTensor](#mstensor)[]> | Promise used to return the result, which is a list of **MSTensor** objects.|

**Example**

```ts
import { mindSporeLite } from '@kit.MindSporeLiteKit';
import { common } from '@kit.AbilityKit';
import { UIContext } from '@kit.ArkUI';

let inputName = 'input_data.bin';
let globalContext = new UIContext().getHostContext() as common.UIAbilityContext;
globalContext.getApplicationContext().resourceManager.getRawFileContent(inputName).then(async (buffer : Uint8Array) => {
  let inputBuffer = buffer.buffer;
  let modelFile = '/path/to/xxx.ms';
  let mindSporeLiteModel : mindSporeLite.Model = await mindSporeLite.loadModelFromFile(modelFile);
  let modelInputs : mindSporeLite.MSTensor[] = mindSporeLiteModel.getInputs();
  modelInputs[0].setData(inputBuffer);
  mindSporeLiteModel.predict(modelInputs).then((mindSporeLiteTensor : mindSporeLite.MSTensor[]) => {
    let output = new Float32Array(mindSporeLiteTensor[0].getData());
    for (let i = 0; i < output.length; i++) {
      console.info(output[i].toString());
    }
  })
})
```

### resize

resize(inputs: MSTensor[], dims: Array&lt;Array&lt;number&gt;&gt;): boolean

Resets the tensor size.

**System capability**: SystemCapability.AI.MindSporeLite

**Parameters**

| Name| Type                 | Mandatory| Description                         |
| ------ | --------------------- | ---- | ----------------------------- |
| inputs | [MSTensor](#mstensor)[]            | Yes  | List of model inputs. |
| dims   | Array&lt;Array&lt;number&gt;&gt; | Yes  | Target tensor sizes.|

**Return value**

| Type   | Description                                                        |
| ------- | ------------------------------------------------------------ |
| boolean | Result indicating whether the setting is successful. The value **true** indicates that the tensor size is successfully reset, and the value **false** indicates the opposite.|

**Example**

```ts
let modelFile = '/path/to/xxx.ms';
mindSporeLite.loadModelFromFile(modelFile).then((mindSporeLiteModel : mindSporeLite.Model) => {
  let modelInputs : mindSporeLite.MSTensor[] = mindSporeLiteModel.getInputs();
  let new_dim: number[][] = [[1, 32, 32, 1]];
  mindSporeLiteModel.resize(modelInputs, new_dim);
})
```

### runStep<sup>12+</sup>

runStep(inputs: MSTensor[]): boolean

Runs the model for a single training step. This API is used only for on-device training.

**System capability**: SystemCapability.AI.MindSporeLite

**Parameters**

| Name   | Type                     | Mandatory | Description      |
| ------ | ----------------------- | --- | -------- |
| inputs | [MSTensor](#mstensor)[] | Yes  | List of model inputs.|

**Return value**

| Type   | Description                                                        |
| ------- | ------------------------------------------------------------ |
| boolean | Result indicating whether the operation is successful. The value **true** indicates that the operation is successful, and the value **false** indicates the opposite.|

**Example**

```ts
let modelFile = '/path/to/xxx.ms';
mindSporeLite.loadTrainModelFromFile(modelFile).then((mindSporeLiteModel: mindSporeLite.Model) => {
  mindSporeLiteModel.trainMode = true;
  const modelInputs = mindSporeLiteModel.getInputs();
  let ret = mindSporeLiteModel.runStep(modelInputs);
  if (ret == false) {
    console.error('MS_LITE_LOG: runStep failed.')
  }
})
```

### getWeights<sup>12+</sup>

getWeights(): MSTensor[]

Obtains all weight tensors of a model. This API is used only for on-device training.

**System capability**: SystemCapability.AI.MindSporeLite

**Return value**

| Type                     | Description        |
| ----------------------- | ---------- |
| [MSTensor](#mstensor)[] | Weight tensors of the training model.|

**Example**

```ts
import { mindSporeLite } from '@kit.MindSporeLiteKit';
import { common } from '@kit.AbilityKit';
import { UIContext } from '@kit.ArkUI';

let modelFile = 'xxx.ms';
let globalContext = new UIContext().getHostContext() as common.UIAbilityContext;
globalContext.getApplicationContext().resourceManager.getRawFileContent(modelFile).then((modelBuffer : Uint8Array) => {
  mindSporeLite.loadTrainModelFromBuffer(modelBuffer.buffer.slice(0)).then((mindSporeLiteModel: mindSporeLite.Model) => {
    mindSporeLiteModel.trainMode = true;
    const weights = mindSporeLiteModel.getWeights();
    for (let i = 0; i < weights.length; i++) {
      let printStr = weights[i].name + ", ";
      printStr += weights[i].shape + ", ";
      printStr += weights[i].dtype + ", ";
      printStr += weights[i].dataSize + ", ";
      printStr += weights[i].getData();
      console.info("MS_LITE weights: ", printStr);
    }
  })
})
```

### updateWeights<sup>12+</sup>

updateWeights(weights: MSTensor[]): boolean

Updates the model weights. This API is used only for on-device training.

**System capability**: SystemCapability.AI.MindSporeLite

**Parameters**

| Name | Type                   | Mandatory| Description          |
| ------- | ----------------------- | ---- | -------------- |
| weights | [MSTensor](#mstensor)[] | Yes  | List of weight tensors.|

**Return value**

| Type   | Description                                                        |
| ------- | ------------------------------------------------------------ |
| boolean | Result indicating whether the operation is successful. The value **true** indicates that the operation is successful, and the value **false** indicates the opposite.|

**Example**

```ts
import { mindSporeLite } from '@kit.MindSporeLiteKit';
import { common } from '@kit.AbilityKit';
import { UIContext } from '@kit.ArkUI';

let modelFile = 'xxx.ms';
let globalContext = new UIContext().getHostContext() as common.UIAbilityContext;
globalContext.getApplicationContext().resourceManager.getRawFileContent(modelFile).then((modelBuffer : Uint8Array) => {
  mindSporeLite.loadTrainModelFromBuffer(modelBuffer.buffer.slice(0)).then((mindSporeLiteModel: mindSporeLite.Model) => {
    mindSporeLiteModel.trainMode = true;
    const weights = mindSporeLiteModel.getWeights();
    let ret = mindSporeLiteModel.updateWeights(weights);
    if (ret == false) {
      console.error('MS_LITE_LOG: updateWeights failed.')
    }
  })
})
```

### setupVirtualBatch<sup>12+</sup>

setupVirtualBatch(virtualBatchMultiplier: number, lr: number, momentum: number): boolean

Sets the virtual batch for training. This API is used only for on-device training.

**System capability**: SystemCapability.AI.MindSporeLite

**Parameters**

| Name                | Type  | Mandatory| Description                                                |
| ---------------------- | ------ | ---- | ---------------------------------------------------- |
| virtualBatchMultiplier | number | Yes  | Virtual batch multiplier. If the value is less than **1**, the virtual batch is disabled.|
| lr                     | number | Yes  | Learning rate.                                            |
| momentum               | number | Yes  | Momentum.                                              |

**Return value**

| Type   | Description                                                        |
| ------- | ------------------------------------------------------------ |
| boolean | Result indicating whether the operation is successful. The value **true** indicates that the operation is successful, and the value **false** indicates the opposite.|

**Example**

```ts
import { mindSporeLite } from '@kit.MindSporeLiteKit';
import { common } from '@kit.AbilityKit';
import { UIContext } from '@kit.ArkUI';

let modelFile = 'xxx.ms';
let globalContext = new UIContext().getHostContext() as common.UIAbilityContext;
globalContext.getApplicationContext().resourceManager.getRawFileContent(modelFile).then((modelBuffer : Uint8Array) => {
  mindSporeLite.loadTrainModelFromBuffer(modelBuffer.buffer.slice(0)).then((mindSporeLiteModel: mindSporeLite.Model) => {
    mindSporeLiteModel.trainMode = true;
    let ret = mindSporeLiteModel.setupVirtualBatch(2, -1, -1);
    if (ret == false) {
      console.error('MS_LITE setupVirtualBatch failed.')
    }
  })
})
```

### exportModel<sup>12+</sup>

exportModel(modelFile: string, quantizationType?: QuantizationType, exportInferenceOnly?: boolean, outputTensorName?: string[]): boolean

Exports a training model. This API is used only for on-device training.

**System capability**: SystemCapability.AI.MindSporeLite

**Parameters**

| Name             | Type                                   | Mandatory| Description                                                        |
| ------------------- | --------------------------------------- | ---- | ------------------------------------------------------------ |
| modelFile           | string                                  | Yes  | File path of the exported training model.                                        |
| quantizationType    | [QuantizationType](#quantizationtype12) | No  | Quantization type. The default value is **NO_QUANT**.                                  |
| exportInferenceOnly | boolean                                 | No  | Whether to export inference models only. The value **true** means to export only inference models, and the value **false** means to export both training and inference models. The default value is **true**.|
| outputTensorName    | string[]                                | No  | Names of the output tensors of the exported training model. The default value is an empty string array, which indicates full export.|

**Return value**

| Type   | Description                                                        |
| ------- | ------------------------------------------------------------ |
| boolean | Result indicating whether the operation is successful. The value **true** indicates that the operation is successful, and the value **false** indicates the opposite.|

**Example**

```ts
let modelFile = '/path/to/xxx.ms';
let newPath = '/newpath/to';
mindSporeLite.loadTrainModelFromFile(modelFile).then((mindSporeLiteModel: mindSporeLite.Model) => {
  mindSporeLiteModel.trainMode = true;
  let ret = mindSporeLiteModel.exportModel(newPath + "/new_model.ms", mindSporeLite.QuantizationType.NO_QUANT, true);
  if (ret == false) {
    console.error('MS_LITE exportModel failed.')
  }
})
```

### exportWeightsCollaborateWithMicro<sup>12+</sup>

exportWeightsCollaborateWithMicro(weightFile: string, isInference?: boolean, enableFp16?: boolean, changeableWeightsName?: string[]): boolean

Exports model weights for micro inference. This API is available only for on-device training.

Micro inference is an ultra-lightweight AI deployment solution that MindSpore Lite provides for microcontroller unit (MCU) hardware backends. It converts models directly into lightweight code offline, eliminating the need for online model parsing and graph compilation.

**System capability**: SystemCapability.AI.MindSporeLite

**Parameters**

| Name               | Type    | Mandatory| Description                                                        |
| --------------------- | -------- | ---- | ------------------------------------------------------------ |
| weightFile            | string   | Yes  | Path of the weight file.                                              |
| isInference           | boolean  | No  | Whether to export weights from the inference model. The value **true** means to export weights from the inference model. The default value is **true**. Currently, only **true** is supported.|
| enableFp16            | boolean  | No  | Whether to store floating-point weights in float16 format. The value **true** means to store floating-point weights in float16 format, and the value **false** means the opposite. The default value is **false**.|
| changeableWeightsName | string[] | No  | Names of the variable weights. The default value is an empty string array.                    |

**Return value**

| Type   | Description                                                        |
| ------- | ------------------------------------------------------------ |
| boolean | Result indicating whether the operation is successful. The value **true** indicates that the operation is successful, and the value **false** indicates the opposite.|

**Example**

```ts
let modelFile = '/path/to/xxx.ms';
let microWeight = '/path/to/xxx.bin';
mindSporeLite.loadTrainModelFromFile(modelFile).then((mindSporeLiteModel: mindSporeLite.Model) => {
  let ret = mindSporeLiteModel.exportWeightsCollaborateWithMicro(microWeight);
  if (ret == false) {
    console.error('MSLITE exportWeightsCollaborateWithMicro failed.')
  }
})
```

## MSTensor

Represents an **MSTensor** instance, with properties and APIs defined. It is a data structure similar to arrays and matrices and is the basic data structure used in MindSpore Lite network operations.

In the following sample code, you first need to use [getInputs()](#getinputs) to obtain an **MSTensor** instance before calling related APIs.

### Attributes

**System capability**: SystemCapability.AI.MindSporeLite

| Name      | Type                 | Read Only| Optional| Description                  |
| ---------- | --------------------- | ---- | ---- | ---------------------- |
| name       | string                | No  | No  | Tensor name.          |
| shape      | number[]              | No  | No  | Tensor dimension array.      |
| elementNum | number                | No  | No  | Length of the tensor dimension array.|
| dataSize   | number                | No  | No  | Length of tensor data.    |
| dtype      | [DataType](#datatype) | No  | No  | Tensor data type.      |
| format     | [Format](#format)     | No  | No  | Tensor data format.  |

**Example**

```ts
let modelFile = '/path/to/xxx.ms';
mindSporeLite.loadModelFromFile(modelFile).then((mindSporeLiteModel : mindSporeLite.Model) => {
  let modelInputs : mindSporeLite.MSTensor[] = mindSporeLiteModel.getInputs();
  console.info(modelInputs[0].name);
  console.info(modelInputs[0].shape.toString());
  console.info(modelInputs[0].elementNum.toString());
  console.info(modelInputs[0].dtype.toString());
  console.info(modelInputs[0].format.toString());
  console.info(modelInputs[0].dataSize.toString());
})
```

### getData

getData(): ArrayBuffer

Obtains tensor data.

**System capability**: SystemCapability.AI.MindSporeLite

**Return value**

| Type       | Description                |
| ----------- | -------------------- |
| ArrayBuffer | Tensor data, as an **ArrayBuffer**.|

**Example**

```ts
import { mindSporeLite } from '@kit.MindSporeLiteKit';
import { common } from '@kit.AbilityKit';
import { UIContext } from '@kit.ArkUI';

let inputName = 'input_data.bin';
let globalContext = new UIContext().getHostContext() as common.UIAbilityContext;
globalContext.getApplicationContext().resourceManager.getRawFileContent(inputName).then(async (buffer : Uint8Array) => {
  let inputBuffer = buffer.buffer;
  let modelFile = '/path/to/xxx.ms';
  let mindSporeLiteModel : mindSporeLite.Model = await mindSporeLite.loadModelFromFile(modelFile);
  let modelInputs : mindSporeLite.MSTensor[] = mindSporeLiteModel.getInputs();
  modelInputs[0].setData(inputBuffer);
  mindSporeLiteModel.predict(modelInputs).then((mindSporeLiteTensor : mindSporeLite.MSTensor[]) => {
    let output = new Float32Array(mindSporeLiteTensor[0].getData());
    for (let i = 0; i < output.length; i++) {
      console.info(output[i].toString());
    }
  })
})
```

### setData

setData(inputArray: ArrayBuffer): void

Sets the tensor data.

**System capability**: SystemCapability.AI.MindSporeLite

**Parameters**

| Name    | Type       | Mandatory| Description                  |
| ---------- | ----------- | ---- | ---------------------- |
| inputArray | ArrayBuffer | Yes  | Input data buffer of the tensor.|

**Example**

```ts
import { mindSporeLite } from '@kit.MindSporeLiteKit';
import { common } from '@kit.AbilityKit';
import { UIContext } from '@kit.ArkUI';

let inputName = 'input_data.bin';
let globalContext = new UIContext().getHostContext() as common.UIAbilityContext;
globalContext.getApplicationContext().resourceManager.getRawFileContent(inputName).then(async (buffer : Uint8Array) => {
  let inputBuffer = buffer.buffer;
  let modelFile = '/path/to/xxx.ms';
  let mindSporeLiteModel : mindSporeLite.Model = await mindSporeLite.loadModelFromFile(modelFile);
  let modelInputs : mindSporeLite.MSTensor[] = mindSporeLiteModel.getInputs();
  modelInputs[0].setData(inputBuffer);
})
```

## DataType

Enumerates tensor data types.

**System capability**: SystemCapability.AI.MindSporeLite

| Name               | Value  | Description               |
| ------------------- | ---- | ------------------- |
| TYPE_UNKNOWN        | 0    | Unknown type.         |
| NUMBER_TYPE_INT8    | 32   | Int8 type.   |
| NUMBER_TYPE_INT16   | 33   | Int16 type.  |
| NUMBER_TYPE_INT32   | 34   | Int32 type.  |
| NUMBER_TYPE_INT64   | 35   | Int64 type.  |
| NUMBER_TYPE_UINT8   | 37   | UInt8 type.  |
| NUMBER_TYPE_UINT16  | 38   | UInt16 type. |
| NUMBER_TYPE_UINT32  | 39   | UInt32 type. |
| NUMBER_TYPE_UINT64  | 40   | UInt64 type. |
| NUMBER_TYPE_FLOAT16 | 42   | Float16 type.|
| NUMBER_TYPE_FLOAT32 | 43   | Float32 type.|
| NUMBER_TYPE_FLOAT64 | 44   | Float64 type.|

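The numeric values above can be used to sanity-check a tensor: **dataSize** should equal **elementNum** times the element width of its **dtype**. The mapping below is a self-contained sketch (the byte widths are the standard sizes for these types; the helper itself is hypothetical, not part of the API):

```ts
// Element width in bytes for each numeric DataType value listed above.
const DTYPE_BYTES: Record<number, number> = {
  32: 1, 33: 2, 34: 4, 35: 8, // int8 / int16 / int32 / int64
  37: 1, 38: 2, 39: 4, 40: 8, // uint8 / uint16 / uint32 / uint64
  42: 2, 43: 4, 44: 8         // float16 / float32 / float64
};

// Hypothetical helper: expected dataSize for a tensor with the given
// dtype value and element count.
function expectedDataSize(dtype: number, elementNum: number): number {
  const bytes = DTYPE_BYTES[dtype];
  if (bytes === undefined) {
    throw new Error('unknown dtype: ' + dtype);
  }
  return bytes * elementNum;
}

console.log(expectedDataSize(43, 10)); // 40 bytes for 10 float32 elements
```
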
## Format

Enumerates tensor data formats.

**System capability**: SystemCapability.AI.MindSporeLite

| Name          | Value  | Description                 |
| -------------- | ---- | --------------------- |
| DEFAULT_FORMAT | -1   | Unknown data format.   |
| NCHW           | 0    | NCHW format. |
| NHWC           | 1    | NHWC format. |
| NHWC4          | 2    | NHWC4 format.|
| HWKC           | 3    | HWKC format. |
| HWCK           | 4    | HWCK format. |
| KCHW           | 5    | KCHW format. |

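To make the difference between layouts such as NCHW and NHWC concrete, the sketch below (hypothetical helpers, not part of the API) computes the flat index of element (n, c, h, w) under each layout:

```ts
// Flat index of element (n, c, h, w) in a tensor stored in NCHW order.
function indexNCHW(n: number, c: number, h: number, w: number,
                   C: number, H: number, W: number): number {
  return ((n * C + c) * H + h) * W + w;
}

// Flat index of the same element when the tensor is stored in NHWC order.
function indexNHWC(n: number, c: number, h: number, w: number,
                   C: number, H: number, W: number): number {
  return ((n * H + h) * W + w) * C + c;
}

// For C=3, H=2, W=2: channel 1 of the first pixel sits at different offsets.
console.log(indexNCHW(0, 1, 0, 0, 3, 2, 2)); // 4
console.log(indexNHWC(0, 1, 0, 0, 3, 2, 2)); // 1
```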