| Name | Date | Size | #Lines | LOC |
|------|------|------|--------|-----|
| README.md | 03-May-2024 | 1.6 KiB | 31 | 27 |
| add_with_const_input.bin | 03-May-2024 | 576 | | |
| argmax.bin | 03-May-2024 | 528 | | |
| broadcast_to.bin | 03-May-2024 | 832 | | |
| concat.bin | 03-May-2024 | 496 | | |
| custom_op.bin | 03-May-2024 | 524 | | |
| fc.bin | 03-May-2024 | 1.1 KiB | | |
| fc_qat.bin | 03-May-2024 | 9.4 KiB | | |
| gather_nd.bin | 03-May-2024 | 736 | | |
| lstm_calibrated.bin | 03-May-2024 | 2.8 KiB | | |
| lstm_calibrated2.bin | 03-May-2024 | 2.7 KiB | | |
| lstm_quantized.bin | 03-May-2024 | 4.2 KiB | | |
| lstm_quantized2.bin | 03-May-2024 | 4.1 KiB | | |
| maximum.bin | 03-May-2024 | 532 | | |
| minimum.bin | 03-May-2024 | 532 | | |
| mixed.bin | 03-May-2024 | 1.1 KiB | | |
| mixed16x8.bin | 03-May-2024 | 1.2 KiB | | |
| multi_input_add_reshape.bin | 03-May-2024 | 804 | | |
| pack.bin | 03-May-2024 | 732 | | |
| quantized_with_gather.bin | 03-May-2024 | 768 | | |
| resource_vars_calibrated.bin | 03-May-2024 | 2.5 KiB | | |
| single_avg_pool_min_minus_5_max_plus_5.bin | 03-May-2024 | 508 | | |
| single_conv_no_bias.bin | 03-May-2024 | 1.2 KiB | | |
| single_conv_weights_min_0_max_plus_10.bin | 03-May-2024 | 1.3 KiB | | |
| single_conv_weights_min_minus_127_max_plus_127.bin | 03-May-2024 | 1.3 KiB | | |
| single_softmax_min_minus_5_max_plus_5.bin | 03-May-2024 | 464 | | |
| split.bin | 03-May-2024 | 920 | | |
| svdf_calibrated.bin | 03-May-2024 | 1,016 | | |
| svdf_quantized.bin | 03-May-2024 | 1.1 KiB | | |
| transpose.bin | 03-May-2024 | 544 | | |
| unidirectional_sequence_lstm_calibrated.bin | 03-May-2024 | 3 KiB | | |
| unidirectional_sequence_lstm_quantized.bin | 03-May-2024 | 4.3 KiB | | |
| unpack.bin | 03-May-2024 | 616 | | |
| weight_shared_between_convs.bin | 03-May-2024 | 948 | | |
| where.bin | 03-May-2024 | 544 | | |


# Test models for testing quantization

This directory contains test models for testing quantization.

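The `.bin` files listed above are serialized TensorFlow Lite models. For a quick look at what one of them contains, it can be loaded with the TFLite Python interpreter; the snippet below is a minimal sketch (assuming the `tensorflow` Python package is available, and using one of the file names above purely as an example):

```python
# Minimal sketch: inspect the tensors of one test model with the TFLite
# Python interpreter. Assumes the `tensorflow` package is installed; the
# model path is only an example taken from the listing above.
import tensorflow as tf

interpreter = tf.lite.Interpreter(
    model_path="single_conv_weights_min_0_max_plus_10.bin")
interpreter.allocate_tensors()

for detail in interpreter.get_tensor_details():
    # 'quantization' is a (scale, zero_point) pair; (0.0, 0) means the tensor
    # is not quantized, which is expected for the floating point models here.
    print(detail["name"], detail["dtype"], detail["shape"], detail["quantization"])
```
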
## Models

* `single_conv_weights_min_0_max_plus_10.bin` \
   A floating point model with a single convolution where all weights are
   randomly distributed integers in the range [0, 10]. It is not guaranteed
   that the weight min and max appear in every channel. All activations have
   recorded min/max values, and the activations are in the range [0, 10].
* `single_conv_weights_min_minus_127_max_plus_127.bin` \
   A floating point model with a single convolution where all weights are
   integers in the range [-127, 127]. The weights are arranged so that each
   channel contains at least one weight equal to -127 and one equal to 127.
   The activations are all in the range [-128, 127]. This means all bias
   computations should result in a scale of 1.0 (see the sketch after this
   list).
* `single_softmax_min_minus_5_max_plus_5.bin` \
   A floating point model with a single softmax. The input tensor has its min
   and max in the range [-5, 5], though not necessarily exactly -5 or +5.
* `single_avg_pool_min_minus_5_max_plus_5.bin` \
   A floating point model with a single average pool. The input tensor has its
   min and max in the range [-5, 5], though not necessarily exactly -5 or +5.
* `weight_shared_between_convs.bin` \
   A floating point model with two convolutions that share the same weight
   tensor.
* `multi_input_add_reshape.bin` \
   A floating point model with two inputs feeding an add, followed by a
   reshape.
* `quantized_with_gather.bin` \
   A floating point model whose input feeds a gather, modeling the mapping of
   a categorical input to embeddings.
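
As noted for `single_conv_weights_min_minus_127_max_plus_127.bin`, the weight and activation ranges are chosen so that the bias scale works out to exactly 1.0. The arithmetic below is a minimal sketch of that reasoning, assuming symmetric int8 weight quantization, asymmetric int8 activation quantization, and the usual convention that the bias scale is the product of the input scale and the weight scale (these assumptions describe common int8 quantization behavior, not code from this directory):

```python
# Minimal sketch of why the bias scale comes out as 1.0 (assumed scheme:
# symmetric int8 weights, asymmetric int8 activations,
# bias_scale = input_scale * weight_scale).
weight_scale = max(abs(-127.0), abs(127.0)) / 127.0  # 127 / 127 = 1.0 per channel
input_scale = (127.0 - (-128.0)) / 255.0             # 255 / 255 = 1.0
bias_scale = input_scale * weight_scale              # 1.0 * 1.0 = 1.0
print(weight_scale, input_scale, bias_scale)         # 1.0 1.0 1.0
```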
31