# OH_NN_QuantParam


## Overview

Quantization information.

In quantization scenarios, the 32-bit floating-point data type is quantized into the fixed-point data type according to the following formula:

q = clamp(round(r/s + z), q_min, q_max)

s and z are quantization parameters, which are stored by **scale** and **zeroPoint** in OH_NN_QuantParam. r is a floating-point number, q is the quantization result, q_min is the lower bound of the quantization result, and q_max is the upper bound of the quantization result. The calculation method is as follows:

q_min = -2^(numBits - 1), q_max = 2^(numBits - 1) - 1

The clamp function is defined as follows:

clamp(x) = q_min, if x < q_min
clamp(x) = x, if q_min ≤ x ≤ q_max
clamp(x) = q_max, if x > q_max

**Since:**
9

**Related Modules:**

[NeuralNeworkRuntime](_neural_nework_runtime.md)


## Summary


### Member Variables

| Name | Description |
| -------- | -------- |
| [quantCount](#quantcount) | Length of the numBits, scale, and zeroPoint arrays. In the per-layer quantization scenario, **quantCount** is usually set to **1**; that is, all channels of a tensor share one set of quantization parameters. In the per-channel quantization scenario, **quantCount** is usually the same as the number of tensor channels, and each channel uses its own quantization parameters. |
| [numBits](#numbits) | Number of quantization bits. |
| [scale](#scale) | Pointer to the scale data in the quantization formula. |
| [zeroPoint](#zeropoint) | Pointer to the zero point data in the quantization formula. |


## Member Variable Description


### numBits


```
const uint32_t* OH_NN_QuantParam::numBits
```
**Description**<br>
Number of quantization bits.


### quantCount


```
uint32_t OH_NN_QuantParam::quantCount
```
**Description**<br>
Length of the numBits, scale, and zeroPoint arrays. In the per-layer quantization scenario, **quantCount** is usually set to **1**; that is, all channels of a tensor share one set of quantization parameters. In the per-channel quantization scenario, **quantCount** is usually the same as the number of tensor channels, and each channel uses its own quantization parameters.


### scale


```
const double* OH_NN_QuantParam::scale
```
**Description**<br>
Pointer to the scale data in the quantization formula.


### zeroPoint


```
const int32_t* OH_NN_QuantParam::zeroPoint
```
**Description**<br>
Pointer to the zero point data in the quantization formula.