# Using the NN-API Test Generator

## Prerequisites

- Python3
- NumPy

## Writing a Test Specification

You should create new test specs in `runtime/test/specs/<version>/` and name them with the `.mod.py` suffix, so that other tools can automatically update the unit tests.

### Specifying Operands

#### Syntax

```
OperandType(name, (type, shape, <optional scale, zero point>), <optional initializer>)
```

For example,

```Python
# p1 is a 2-by-2 fp matrix parameter, with value [1, 2; 3, 4]
p1 = Parameter("param", ("TENSOR_FLOAT32", [2, 2]), [1, 2, 3, 4])

# i1 is a quantized input of shape (2, 256, 256, 3), with scale = 0.5, zero point = 128
i1 = Input("input", ("TENSOR_QUANT8_ASYMM", [2, 256, 256, 3], 0.5, 128))

# p2 is an Int32 scalar with value 1
p2 = Int32Scalar("act", 1)
```
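
For quantized operand types such as TENSOR_QUANT8_ASYMM, the scale and zero point define how stored integer values map to real values: `real_value = (quantized_value - zero_point) * scale`. The following plain-Python sketch (independent of the test generator) illustrates the mapping for the `i1` operand above:

```python
# Quantization parameters of i1 above: scale = 0.5, zero point = 128.
scale, zero_point = 0.5, 128

def dequantize(q):
    # TENSOR_QUANT8_ASYMM: real_value = (quantized_value - zero_point) * scale
    return (q - zero_point) * scale

def quantize(r):
    # Round to the nearest integer and clamp to the uint8 range [0, 255].
    return min(255, max(0, round(r / scale) + zero_point))

print(dequantize(128))  # 0.0 -- the zero point maps to real value 0
print(dequantize(130))  # 1.0
print(quantize(-1.0))   # 126
```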

#### OperandType

There are currently 10 operand types supported by the test generator.

- Input
- Output
    * IgnoredOutput, will not compare results in the test
- Parameter
    * Int32Scalar, shorthand for a parameter with type INT32
    * Float32Scalar, shorthand for a parameter with type FLOAT32
    * Int32Vector, shorthand for a 1-D TENSOR_INT32 parameter
    * Float32Vector, shorthand for a 1-D TENSOR_FLOAT32 parameter
    * SubgraphReference, shorthand for a SUBGRAPH parameter
- Internal, for models with multiple operations

### Specifying Models

#### Instantiate a model

```Python
# Instantiate a model
model = Model()

# Instantiate a model with a name
model2 = Model("model_name")
```

#### Add an operation

```
model.Operation(optype, i1, i2, ...).To(o1, o2, ...)
```

For example,

```Python
model.Operation("ADD", i1, i2, act).To(o1)
```

#### Use implicit operands

Simple scalar and 1-D vector parameters can be passed directly to the Operation constructor, and the test generator will deduce the operand type from the value provided.

```Python
model.Operation("MEAN", i1, [1], 0) # axis = [1], keep_dims = 0
```

Note that, for fp values, the initializers should all be Python fp numbers, e.g. use `1.0` or `1.` instead of `1` for implicit fp operands.
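
This is because the deduction keys off the Python type of the literal. A hypothetical sketch of the idea (the real logic lives in `test_generator.py` and may differ):

```python
# Hypothetical illustration of deducing an implicit operand type from a
# Python literal. This is NOT the generator's actual code.
def deduce_operand_type(value):
    if isinstance(value, bool):       # check bool before int (bool subclasses int)
        return "BOOL"
    if isinstance(value, int):
        return "INT32"
    if isinstance(value, float):
        return "FLOAT32"
    if isinstance(value, (list, tuple)) and value:
        if all(isinstance(v, int) for v in value):
            return "TENSOR_INT32"
        if all(isinstance(v, (int, float)) for v in value):
            return "TENSOR_FLOAT32"
    raise ValueError("cannot deduce operand type")

print(deduce_operand_type(0))    # INT32 -- why keep_dims=0 becomes an int scalar
print(deduce_operand_type([1]))  # TENSOR_INT32 -- why axis=[1] becomes an int vector
print(deduce_operand_type(1.0))  # FLOAT32 -- why fp operands need 1.0, not 1
```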

### Specifying Inputs and Expected Outputs

The combination of inputs and expected outputs is called an example for a given model. An example is defined as follows:

```Python
# Example 1, separate dictionaries for inputs and outputs
input1 = {
    i1: [1, 2],
    i2: [3, 4]
}
output1 = {o1: [4, 6]}

# Example 2, combined dictionary
example2_values = {
    i1: [5, 6],
    i2: [7, 8],
    o1: [12, 14]
}

# Instantiate an example
Example((input1, output1), example2_values)
```

By default, examples will be attached to the most recently instantiated model. You can explicitly specify the target model, and optionally the example name, by

```Python
Example((input1, output1), example2_values, model=model, name="example_name")
```

### Specifying Variations

You can add variations to the example so that the test generator can automatically create multiple tests. The following variations are supported:

- DefaultVariation, i.e. no variation
- DataTypeConverter
- DataLayoutConverter
- AxisConverter
- RelaxedModeConverter
- ActivationConverter
- AllOutputsAsInternalCoverter

#### DataTypeConverter

Convert inputs/parameters/outputs to the specified type, e.g. float32 -> quant8. The target data type for each operand to transform has to be explicitly specified. It is the spec writer's responsibility to ensure such a conversion is valid.

```Python
converter = DataTypeConverter(name="variation_name").Identify({
    op1: (target_type, target_scale, target_zero_point),
    op2: (target_type, target_scale, target_zero_point),
    ...
})
```

#### DataLayoutConverter

Convert inputs/parameters/outputs between NHWC and NCHW. The caller needs to provide a list of target operands to transform, and also the data layout parameter to set.

```Python
converter = DataLayoutConverter(target_data_layout, name="variation_name").Identify(
    [op1, op2, ..., layout_parameter]
)
```
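
What the layout conversion does to the data itself can be illustrated with a plain NumPy transpose (this only shows the axis permutation, not the converter's implementation), where NHWC means [batch, height, width, channels] and NCHW means [batch, channels, height, width]:

```python
import numpy as np

# A tiny NHWC tensor: [batch, height, width, channels]
nhwc = np.arange(2 * 3 * 4 * 5).reshape(2, 3, 4, 5)

# NHWC -> NCHW moves the channel axis to position 1.
nchw = nhwc.transpose(0, 3, 1, 2)
print(nchw.shape)  # (2, 5, 3, 4)

# The inverse permutation recovers the original layout.
assert np.array_equal(nchw.transpose(0, 2, 3, 1), nhwc)
```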

#### AxisConverter

Transpose a certain axis in the inputs/outputs to a target position, and optionally remove some axes. The caller needs to provide a list of target operands to transform, and also the axis parameter to set.

```Python
converter = AxisConverter(originalAxis, targetAxis, dimension, drop=[], name="variation_name").Identify(
    [op1, op2, ..., axis_parameter]
)
```

This model variation is for ops that apply calculation along a certain axis, such as L2_NORMALIZATION, SOFTMAX, and CHANNEL_SHUFFLE. For example, consider L2_NORMALIZATION with input of shape [2, 3, 4, 5] along the last axis, i.e. axis = -1. The output shape would be the same as the input. We can create a new model which will do the calculation along axis 0 by transposing the input and output shapes to [5, 2, 3, 4] and modifying the axis parameter to 0. Such a converter can be defined as

```Python
toAxis0 = AxisConverter(-1, 0, 4).Identify([input, output, axis])
```

The target axis can also be negative, to test negative indexing:

```Python
toAxis0 = AxisConverter(-1, -4, 4).Identify([input, output, axis])
```

Considering the same L2_NORMALIZATION example, we can also create a new model with input/output of 2-D shape [4, 5] by removing the first two dimensions. This is essentially doing `new_input = input[0,0,:,:]` in numpy. Such a converter can be defined as

```Python
toDim2 = AxisConverter(-1, -1, 4, drop=[0, 1]).Identify([input, output, axis])
```

If transposition and removal are specified at the same time, the converter will do the transposition first and then remove the axes. For example, the following converter will result in shape [5, 4] and axis 0.

```Python
toDim2Axis0 = AxisConverter(-1, 2, 4, drop=[0, 1]).Identify([input, output, axis])
```
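
The resulting shapes described above can be double-checked with plain NumPy (this only illustrates the shape effects, not the converter's implementation):

```python
import numpy as np

x = np.zeros((2, 3, 4, 5))  # L2_NORMALIZATION input, axis = -1

# AxisConverter(-1, 0, 4): move the last axis to position 0.
to_axis0 = np.moveaxis(x, -1, 0)
print(to_axis0.shape)  # (5, 2, 3, 4)

# AxisConverter(-1, -1, 4, drop=[0, 1]): keep the axis where it is,
# then drop the first two dimensions, i.e. new_input = input[0, 0, :, :].
to_dim2 = x[0, 0, :, :]
print(to_dim2.shape)  # (4, 5)

# AxisConverter(-1, 2, 4, drop=[0, 1]): transpose the last axis to
# position 2 first, then drop the first two axes -> shape (5, 4), axis 0.
to_dim2_axis0 = np.moveaxis(x, -1, 2)[0, 0, :, :]
print(to_dim2_axis0.shape)  # (5, 4)
```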

#### RelaxedModeConverter

Convert the model to enable/disable relaxed computation.

```Python
converter = RelaxedModeConverter(is_relaxed, name="variation_name")
```

#### ActivationConverter

Convert the output by applying a certain activation; the original activation is assumed to be NONE. The caller needs to provide a list of target operands to transform, and also the activation parameter to set.

```Python
converter = ActivationConverter(name="variation_name").Identify(
    [op1, op2, ..., act_parameter]
)
```
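
For reference, the fused activations named in this document (relu, relu1, relu6) clamp the output values as sketched below in plain Python (an illustration of the standard NNAPI fused-activation semantics, not the converter's code):

```python
# Fused activation functions applied element-wise to an output value:
def relu(x):  return max(0.0, x)             # clamp to [0, inf)
def relu1(x): return max(-1.0, min(1.0, x))  # clamp to [-1, 1]
def relu6(x): return max(0.0, min(6.0, x))   # clamp to [0, 6]

print([relu(v)  for v in [-2.0, 0.5, 8.0]])  # [0.0, 0.5, 8.0]
print([relu1(v) for v in [-2.0, 0.5, 8.0]])  # [-1.0, 0.5, 1.0]
print([relu6(v) for v in [-2.0, 0.5, 8.0]])  # [0.0, 0.5, 6.0]
```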

#### AllOutputsAsInternalCoverter

Add a placeholder ADD operation after each model output to make it an internal operand. Will skip if the model does not have any output tensor that is compatible with the ADD operation, or if the model has more than one operation.

#### Add variation to example

Each example can have multiple groups of variations, and if so, will take the cartesian product of the groups. For example, suppose we declare a model with two groups of variations, `[[default, nchw], [default, relaxed, quant8]]`. This will result in 6 examples: `[default, default], [default, relaxed], [default, quant8], [nchw, default], [nchw, relaxed], [nchw, quant8]`.
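
The cartesian-product behavior can be reproduced with `itertools.product`:

```python
from itertools import product

group1 = ["default", "nchw"]
group2 = ["default", "relaxed", "quant8"]

# Every pairing of one variation from each group yields one test.
combinations = list(product(group1, group2))
print(len(combinations))  # 6
print(combinations)
# [('default', 'default'), ('default', 'relaxed'), ('default', 'quant8'),
#  ('nchw', 'default'), ('nchw', 'relaxed'), ('nchw', 'quant8')]
```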

Use `AddVariations` to add a group of variations to the example:

```Python
# Add two groups of variations [default, nchw] and [default, relaxed, quant8]
example.AddVariations(nchw).AddVariations(relaxed, quant8)
```

By default, when you add a group of variations, an unnamed default variation will be automatically included in the list. You can name the default variation by

```Python
example.AddVariations(nchw, defaultName="nhwc").AddVariations(relaxed, quant8)
```

Also, you can choose not to include the default by

```Python
# Add two groups of variations [nchw] and [default, relaxed, quant8]
example.AddVariations(nchw, includeDefault=False).AddVariations(relaxed, quant8)
```

The example above will result in 3 examples: `[nchw, default], [nchw, relaxed], [nchw, quant8]`.

#### Default variations

By default, the test generator will apply the following variations automatically.

- **AllTensorsAsInputsConverter:** Convert all constant tensors in the model to model inputs. Will skip if the model does not have any constant tensor, or if the model has more than one operation. If not explicitly disabled, this variation will be automatically applied to all tests.

- **AllInputsAsInternalCoverter:** Add a placeholder ADD operation before each model input to make it an internal operand. Will skip if the model does not have any input tensor that is compatible with the ADD operation, or if the model has more than one operation. If not explicitly disabled, this variation will be automatically applied to all tests.

You can opt out by invoking the corresponding methods on examples.

```Python
# Disable AllTensorsAsInputsConverter and AllInputsAsInternalCoverter.
example.DisableLifeTimeVariation()
```

You may also mark a certain operand as input/const-only, so that `AllInputsAsInternalCoverter` will skip converting this operand.

```Python
# "hash" will be converted to a model input when applying AllTensorsAsInputsConverter,
# but will be skipped when further applying AllInputsAsInternalCoverter.
hash = Parameter("hash", "TENSOR_FLOAT32", "{1, 1}", [0.123]).ShouldNeverBeInternal()
```

#### Some helper functions

The test generator provides several helper functions, or shorthands, to add commonly used groups of variations.

```Python
# The statements within each of the following groups are equivalent

# DataTypeConverter
example.AddVariations(DataTypeConverter().Identify({op1: "TENSOR_FLOAT16", ...}))
example.AddVariations("float16")    # will apply to every TENSOR_FLOAT32 operand

example.AddVariations(DataTypeConverter().Identify({op1: "TENSOR_INT32", ...}))
example.AddVariations("int32")      # will apply to every TENSOR_FLOAT32 operand

# DataLayoutConverter
example.AddVariations(DataLayoutConverter("nchw").Identify(op_list))
example.AddVariations(("nchw", op_list))
example.AddNchw(*op_list)

# AxisConverter
# original axis and dim are deduced from the op_list
example.AddVariations(*[AxisConverter(origin, t, dim).Identify(op_list) for t in targets])
example.AddAxis(targets, *op_list)

example.AddVariations(*[
        AxisConverter(origin, t, dim).Identify(op_list) for t in range(dim)
    ], includeDefault=False)
example.AddAllPositiveAxis(*op_list)

example.AddVariations(*[
        AxisConverter(origin, t, dim).Identify(op_list) for t in range(-dim, dim)
    ], includeDefault=False)
example.AddAllAxis(*op_list)

drop = list(range(dim))
drop.pop(origin)
example.AddVariations(*[
    AxisConverter(origin, origin, dim, drop[0:(dim-i)]).Identify(op_list) for i in dims])
example.AddDims(dims, *op_list)

example.AddVariations(*[
    AxisConverter(origin, origin, dim, drop[0:i]).Identify(op_list) for i in range(dim)])
example.AddAllDims(dims, *op_list)

example.AddVariations(*[
        AxisConverter(origin, j, dim, range(i)).Identify(op_list) \
                for i in range(dim) for j in range(i, dim)
    ], includeDefault=False)
example.AddAllDimsAndPositiveAxis(dims, *op_list)

example.AddVariations(*[
        AxisConverter(origin, k, dim, range(i)).Identify(op_list) \
                for i in range(dim) for j in range(i, dim) for k in [j, j - dim]
    ], includeDefault=False)
example.AddAllDimsAndAxis(dims, *op_list)

# RelaxedModeConverter
example.AddVariations(RelaxedModeConverter(True))
example.AddVariations("relaxed")
example.AddRelaxed()

# ActivationConverter
example.AddVariations(ActivationConverter("relu").Identify(op_list))
example.AddVariations(("relu", op_list))
example.AddRelu(*op_list)

example.AddVariations(
    ActivationConverter("relu").Identify(op_list),
    ActivationConverter("relu1").Identify(op_list),
    ActivationConverter("relu6").Identify(op_list))
example.AddVariations(
    ("relu", op_list),
    ("relu1", op_list),
    ("relu6", op_list))
example.AddAllActivations(*op_list)
```

#### Specifying SUBGRAPH conversions

Converters that support nested control flow models accept the following syntax:

```
converter = DataTypeConverter().Identify({
    ...
    subgraphOperand: DataTypeConverter().Identify({
        ...
    }),
    ...
})
```

### Specifying the Model Version

If not explicitly specified, the minimal required HAL version will be inferred from the path, e.g. the models defined in `nn/runtime/test/specs/V1_0/add.mod.py` will all have version `V1_0`. However, there are exceptions where an operation is under-tested in a previous version and more tests are added in a later version. In such cases, two methods are provided to set the version manually.

#### Set the version when creating the model

Use `IntroducedIn` to set the version of a model. All variations of the model will have the same version.

```Python
model_V1_0 = Model().IntroducedIn("V1_0")
...
# All variations of model_V1_0 will have the same version V1_0.
Example(example, model=model_V1_0).AddVariations(var0, var1, ...)
```

#### Set the version overrides

Use `Example.SetVersion` to override the model version for specific tests. The target tests are specified by name. This method can also override the version specified by `IntroducedIn`.

```Python
Example.SetVersion(<version>, testName0, testName1, ...)
```

This is useful when only a subset of variations has a different version.

### Specifying model inputs and outputs

Use `Model.IdentifyInputs` and `Model.IdentifyOutputs` to explicitly specify model inputs and outputs. This is particularly useful for models referenced by IF and WHILE operations.

```Python
DataType = ["TENSOR_INT32", [1]]
BoolType = ["TENSOR_BOOL8", [1]]

def MakeConditionModel():
  a = Input("a", DataType)
  b = Input("b", DataType)
  out = Output("out", BoolType)
  model = Model()
  model.IdentifyInputs(a, b)  # "a" is unused by the model.
  model.IdentifyOutputs(out)
  model.Operation("LESS", b, [10]).To(out)
  return model

def MakeBodyModel():
  a = Input("a", DataType)
  b = Input("b", DataType)
  a_out = Output("a_out", DataType)
  b_out = Output("b_out", DataType)
  model = Model()
  model.IdentifyInputs(a, b)  # The order is the same as in the WHILE operation.
  model.IdentifyOutputs(a_out, b_out)
  model.Operation("SUB", b, a, 0).To(a_out)
  model.Operation("ADD", b, [1], 0).To(b_out)
  return model

a = Input("a", DataType)
a_out = Output("a_out", DataType)
cond = MakeConditionModel()
body = MakeBodyModel()
b_init = [1]
Model().Operation("WHILE", cond, body, a, b_init).To(a_out)
```
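
To see what this spec computes, the WHILE loop can be simulated in plain Python: the condition model keeps looping while `b < 10`, and the body model computes `a_out = b - a` and `b_out = b + 1`. This is an illustration only; the starting value of operand "a" below is an assumed sample:

```python
def simulate_while(a, b=1):
    # Condition model: LESS(b, [10]) -> keep looping while b < 10.
    while b < 10:
        # Body model: SUB(b, a, 0) -> a_out; ADD(b, [1], 0) -> b_out.
        a, b = b - a, b + 1
    return a  # a_out is the only output the WHILE operation identifies

# With a = 0 and b_init = [1], the loop runs 9 times.
print(simulate_while(0))  # 5
```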

### Creating negative tests

A negative test, also known as a validation test, supplies an invalid model or request and expects the target framework or driver to fail gracefully. You can use `ExpectFailure` to tag an example as invalid.

```Python
Example.ExpectFailure()
```

### A Complete Example

```Python
# Declare input, output, and parameters
i1 = Input("op1", ("TENSOR_FLOAT32", [1, 3, 4, 1]))
f1 = Parameter("op2", ("TENSOR_FLOAT32", [1, 3, 3, 1]), [1, 4, 7, 2, 5, 8, 3, 6, 9])
b1 = Parameter("op3", ("TENSOR_FLOAT32", [1]), [-200])
act = Int32Scalar("act", 0)
layout = BoolScalar("layout", False) # NHWC data layout
o1 = Output("op4", ("TENSOR_FLOAT32", [1, 3, 4, 1]))

# Instantiate a model and add a CONV_2D operation
# Use implicit parameters for implicit padding and strides
Model().Operation("CONV_2D", i1, f1, b1, 1, 1, 1, act, layout).To(o1)

# Additional data type
quant8 = DataTypeConverter().Identify({
    i1: ("TENSOR_QUANT8_ASYMM", 0.5, 127),
    f1: ("TENSOR_QUANT8_ASYMM", 0.5, 127),
    b1: ("TENSOR_INT32", 0.25, 0),
    o1: ("TENSOR_QUANT8_ASYMM", 1.0, 50)
})

# Instantiate an example
example = Example({
    i1: [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12],
    o1: [0, 0, 0, 0, 35, 112, 157, 0, 0, 34, 61, 0]
})

# Only use NCHW data layout
example.AddNchw(i1, f1, o1, layout, includeDefault=False)

# Add two more groups of variations
example.AddInput(f1, b1).AddVariations("relaxed", quant8).AddAllActivations(o1, act)

# The following variations are added implicitly.
# example.AddVariations(AllTensorsAsInputsConverter())
# example.AddVariations(AllInputsAsInternalCoverter())

# The following variation is added implicitly if this test is introduced in v1.2 or later.
# example.AddVariations(DynamicOutputShapeConverter())
```

The spec above will result in 96 tests if introduced in v1.0 or v1.1, and 192 tests if introduced in v1.2 or later.

## Generate Tests

Once you have your model ready, run

```
$ANDROID_BUILD_TOP/packages/modules/NeuralNetworks/runtime/test/specs/generate_all_tests.sh
```

It will update all CTS and VTS tests based on the spec files in `nn/runtime/test/specs/V1_*/*`.

Rebuild with `mma` afterwards.