/external/tensorflow/tensorflow/contrib/lite/toco/tflite/
D | import.cc | in ImportTensors():
      73  auto quantization = input_tensor->quantization();
      74  if (quantization) {
      77    if (quantization->min() && quantization->max()) {
      78      CHECK_EQ(1, quantization->min()->Length());
      79      CHECK_EQ(1, quantization->max()->Length());
      81      minmax.min = quantization->min()->Get(0);
      82      minmax.max = quantization->max()->Get(0);
      84    if (quantization->scale() && quantization->zero_point()) {
      85      CHECK_EQ(1, quantization->scale()->Length());
      86      CHECK_EQ(1, quantization->zero_point()->Length());
      [all …]
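This import path insists that `min`, `max`, `scale`, and `zero_point` each hold exactly one element, i.e. per-tensor rather than per-channel quantization. A minimal sketch of the same read pattern, assuming the flatbuffers-generated accessors shown above; the `MinMax` struct and function name are illustrative stand-ins for toco's own types:

```cpp
// Sketch only: mirrors the ImportTensors() logic above. QuantizationParameters
// stands for the flatbuffers-generated table from the TFLite schema.
struct MinMax {
  double min = 0.0;
  double max = 0.0;
};

template <typename QuantizationParameters>
bool ReadPerTensorMinMax(const QuantizationParameters* quantization,
                         MinMax* minmax) {
  if (!quantization) return false;
  if (!quantization->min() || !quantization->max()) return false;
  // Per-tensor quantization: exactly one min and one max are expected.
  if (quantization->min()->Length() != 1) return false;
  if (quantization->max()->Length() != 1) return false;
  minmax->min = quantization->min()->Get(0);
  minmax->max = quantization->max()->Get(0);
  return true;
}
```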
/external/gemmlowp/doc/
D | public.md |
      15  rationale for a specific quantization paradigm is given in
      16  [quantization.md](quantization.md). That specific quantization paradigm is
      97  quantization paradigm explained in [quantization.md](quantization.md) that
     102  pipeline (see [output.md](output.md)). This is the part of the quantization
     103  paradigm explained in [quantization.md](quantization.md) that needs to be
     146  quantization", whence the PC suffix. This has been useful in some settings where
     149  the need for per-channel quantization. For that reason, the long-term usefulness
     155  section of [low-precision.md](low-precision.md) on the legacy quantization
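public.md documents `GemmWithOutputPipeline` as the main entry point. A sketch of a call, assuming the standard gemmlowp headers and output stages; the include path, offsets, and output-stage parameters below are illustrative placeholders, so treat this as the shape of the API rather than working settings:

```cpp
#include <cstdint>
#include <tuple>
#include "public/gemmlowp.h"  // assumed include path into the gemmlowp checkout

// Sketch: uint8 GEMM through the documented public entry point, with a
// quantize-down stage followed by a saturating cast back to uint8.
void ExampleGemm(const std::uint8_t* lhs_data, const std::uint8_t* rhs_data,
                 std::uint8_t* result_data, int m, int n, int k) {
  using gemmlowp::MapOrder;
  gemmlowp::MatrixMap<const std::uint8_t, MapOrder::RowMajor> lhs(lhs_data, m, k);
  gemmlowp::MatrixMap<const std::uint8_t, MapOrder::ColMajor> rhs(rhs_data, k, n);
  gemmlowp::MatrixMap<std::uint8_t, MapOrder::ColMajor> result(result_data, m, n);

  gemmlowp::OutputStageQuantizeDownInt32ToUint8Scale quantize_down_stage;
  quantize_down_stage.result_offset = 0;    // placeholder
  quantize_down_stage.result_mult_int = 1;  // placeholder
  quantize_down_stage.result_shift = 0;     // placeholder
  gemmlowp::OutputStageSaturatingCastToUint8 cast_stage;
  const auto output_pipeline = std::make_tuple(quantize_down_stage, cast_stage);

  gemmlowp::GemmContext context;
  gemmlowp::GemmWithOutputPipeline<std::uint8_t, std::uint8_t,
                                   gemmlowp::DefaultL8R8BitDepthParams>(
      &context, lhs, rhs, &result, /*lhs_offset=*/-128, /*rhs_offset=*/-128,
      output_pipeline);
}
```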
D | quantization.md |
       1  # Building a quantization paradigm from first principles
      12  quantization paradigm affects the calculations that gemmlowp itself needs to
      25  naturally at some specific quantization paradigm, and how that can be
      28  We also aim to show how that differs from the older, legacy quantization
      30  newer quantization paradigm described in this document was useful as far as some
      75  ## The final form of the quantization equation
      78  representable — means in either quantization equations, (1) and (2).
     111  In other words, `D = -zero_point`. This suggests rewriting the quantization
     122  With this quantization equation (3), the condition that 0 be exactly
     148  above equation (3), with some already-known quantization parameters `lhs_scale`,
     [all …]
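Equation (3) referenced above, with `D = -zero_point`, is the affine mapping `real_value = scale * (quantized_value - zero_point)`, where `zero_point` is itself a quantized value, so 0 is always exactly representable. A minimal sketch of the corresponding quantize/dequantize pair for 8-bit unsigned storage (helper names are illustrative, not gemmlowp API):

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>

// Quantization equation (3): real_value = scale * (quantized_value - zero_point).
float Dequantize(std::uint8_t q, float scale, std::int32_t zero_point) {
  return scale * (static_cast<std::int32_t>(q) - zero_point);
}

// Inverse mapping: round to the nearest grid point, clamp to uint8 range.
std::uint8_t Quantize(float real, float scale, std::int32_t zero_point) {
  const std::int32_t q =
      zero_point + static_cast<std::int32_t>(std::round(real / scale));
  return static_cast<std::uint8_t>(std::min(255, std::max(0, q)));
}
```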
D | output.md |
      10  quantization paradigms. See [low-precision.md](low-precision.md) and
      11  [quantization.md](quantization.md).
      13  Besides implementing a quantization paradigm, the other thing that output
      51  quantized matrix multiplication with a sound quantization paradigm, is here:
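The output pipeline's role in this paradigm is to scale the raw int32 accumulators back down into the uint8 output range. A minimal sketch of that quantize-down step, assuming a real-valued multiplier `output_scale = lhs_scale * rhs_scale / result_scale` (names illustrative; gemmlowp itself expresses this with integer fixed-point output stages rather than a float multiply):

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>

// Sketch of the final output stage: rescale an int32 accumulator into the
// quantized uint8 output range, then clamp.
std::uint8_t QuantizeDown(std::int32_t acc, float output_scale,
                          std::int32_t result_zero_point) {
  const std::int32_t q =
      result_zero_point +
      static_cast<std::int32_t>(std::round(acc * output_scale));
  return static_cast<std::uint8_t>(std::min(255, std::max(0, q)));
}
```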
D | low-precision.md |
      31  Refer to [quantization.md](quantization.md) for details of how one gets from
      62  matrix of uint8 quantized values - the following int32 "quantization
      86  [quantization.md](quantization.md) for how, reasoning from first principles, one
      87  arrives at a substantially different quantization paradigm.
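In the legacy paradigm sketched in low-precision.md, each uint8 entry is interpreted together with a per-matrix offset, and products are summed in int32. A minimal sketch of that accumulation for one output entry (plain reference code, not gemmlowp's optimized kernels):

```cpp
#include <cstdint>

// Reference accumulation for one result entry: each uint8 value is used as
// (value + offset), and the products are accumulated into an int32.
std::int32_t AccumulateOneEntry(const std::uint8_t* lhs_row,
                                const std::uint8_t* rhs_col, int depth,
                                std::int32_t lhs_offset,
                                std::int32_t rhs_offset) {
  std::int32_t acc = 0;
  for (int i = 0; i < depth; ++i) {
    acc += (static_cast<std::int32_t>(lhs_row[i]) + lhs_offset) *
           (static_cast<std::int32_t>(rhs_col[i]) + rhs_offset);
  }
  return acc;
}
```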
/external/tensorflow/tensorflow/contrib/quantization/python/
D | __init__.py |
      22  from tensorflow.contrib.quantization.python.array_ops import *
      23  from tensorflow.contrib.quantization.python.math_ops import *
      24  from tensorflow.contrib.quantization.python.nn_ops import *
/external/tensorflow/tensorflow/contrib/quantization/
D | __init__.py |
      23  from tensorflow.contrib.quantization.python import array_ops as quantized_array_ops
      24  from tensorflow.contrib.quantization.python.math_ops import *
      25  from tensorflow.contrib.quantization.python.nn_ops import *
/external/tensorflow/tensorflow/core/api_def/base_api/
D | api_def_QuantizeAndDequantizeV2.pbtxt |
      26  Whether the quantization is signed or unsigned.
      32  The bitwidth of the quantization.
      45  quantization method when it is used in inference.
      52  quantization), so that 0.0 maps to 0.
      62  Next, we choose our fixed-point quantization buckets, [min_fixed, max_fixed].
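QuantizeAndDequantizeV2 rounds values to a fixed-point grid and immediately converts back to float, so a model trained with it sees the same error that the quantized inference path will later introduce. A minimal sketch of the core arithmetic for the signed case, assuming a symmetric bucket range (a simplification of the full op, which also handles unsigned mode and nudges the range so 0.0 maps exactly to 0):

```cpp
#include <algorithm>
#include <cmath>

// Sketch: quantize-and-dequantize through signed fixed-point buckets
// [min_fixed, max_fixed] = [-(2^(num_bits-1) - 1), 2^(num_bits-1) - 1],
// e.g. [-127, 127] for num_bits = 8.
float QuantizeAndDequantize(float x, float min_range, float max_range,
                            int num_bits) {
  const float max_fixed = static_cast<float>((1 << (num_bits - 1)) - 1);
  const float min_fixed = -max_fixed;
  const float m = std::max(std::fabs(min_range), std::fabs(max_range));
  const float scale = max_fixed / m;  // real -> fixed-point
  const float clamped = std::min(max_range, std::max(min_range, x));
  const float q = std::round(clamped * scale);
  const float q_clamped = std::min(max_fixed, std::max(min_fixed, q));
  return q_clamped / scale;  // dequantize back to real
}
```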
D | api_def_FakeQuantWithMinMaxArgs.pbtxt |
       6  `inputs` values are quantized into the quantization range (`[0; 2^num_bits - 1]`
       9  `num_bits` is the bitwidth of the quantization; between 2 and 8, inclusive.
D | api_def_FakeQuantWithMinMaxVars.pbtxt |
       8  `inputs` values are quantized into the quantization range (`[0; 2^num_bits - 1]`
      11  `num_bits` is the bitwidth of the quantization; between 2 and 8, inclusive.
D | api_def_FakeQuantWithMinMaxVarsPerChannel.pbtxt |
       9  `inputs` values are quantized into the quantization range (`[0; 2^num_bits - 1]`
      12  `num_bits` is the bitwidth of the quantization; between 2 and 8, inclusive.
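All three fake-quant ops share one forward computation: nudge `[min, max]` so that 0.0 lands exactly on the grid, quantize into `[0, 2^num_bits - 1]`, and dequantize again; the per-channel variant simply applies it once per channel. A minimal sketch of that forward pass, simplified from TensorFlow's kernels (function name illustrative):

```cpp
#include <algorithm>
#include <cmath>

// Sketch of the fake-quant forward pass: values are rounded onto a uniform
// grid of 2^num_bits levels spanning a nudged [min, max] interval, then
// mapped back to floats. Gradients (not shown) pass through unchanged
// inside the interval and are zero outside it.
float FakeQuant(float x, float min, float max, int num_bits) {
  const float quant_max = static_cast<float>((1 << num_bits) - 1);
  const float scale = (max - min) / quant_max;
  // Nudge the zero point to an integer so 0.0 is exactly representable.
  const float zero_point = std::round(-min / scale);
  const float nudged_min = -zero_point * scale;
  const float nudged_max = (quant_max - zero_point) * scale;
  const float clamped = std::min(nudged_max, std::max(nudged_min, x));
  return std::round((clamped - nudged_min) / scale) * scale + nudged_min;
}
```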
D | api_def_Dequantize.pbtxt |
      52  `SCALED` mode matches the quantization approach used in
      57  -127 to 127, not -128 to 127 for signed 8 bit quantization), so that 0.0 maps to
      68  Next, we choose our fixed-point quantization buckets, `[min_fixed, max_fixed]`.
D | api_def_QuantizeV2.pbtxt |
      80  `SCALED` mode matches the quantization approach used in
      85  -127 to 127, not -128 to 127 for signed 8 bit quantization), so that 0.0 maps to
      96  Next, we choose our fixed-point quantization buckets, `[min_fixed, max_fixed]`.
     120  requested minimum and maximum values slightly during the quantization process,
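In `SCALED` mode the buckets are symmetric, which is why signed 8-bit output uses -127 to 127 rather than -128 to 127: a symmetric range guarantees that 0.0 maps exactly to 0. A minimal sketch of the quantize/dequantize pair in that mode (simplified; as line 120 notes, the real ops may also adjust the requested range slightly):

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>

// Sketch of SCALED-mode quantization for signed 8-bit output. The range
// [-m, m] with m = max(|min_range|, |max_range|) maps onto [-127, 127]
// (not -128), so the mapping is symmetric around 0.
std::int8_t QuantizeScaled(float x, float min_range, float max_range) {
  const float m = std::max(std::fabs(min_range), std::fabs(max_range));
  const float scale = 127.0f / m;
  const float q = std::round(std::min(m, std::max(-m, x)) * scale);
  return static_cast<std::int8_t>(q);
}

float DequantizeScaled(std::int8_t q, float min_range, float max_range) {
  const float m = std::max(std::fabs(min_range), std::fabs(max_range));
  return static_cast<float>(q) * (m / 127.0f);
}
```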
/external/webp/src/enc/
D | predictor_enc.c |
     151  uint8_t boundary, int quantization) {    // in NearLosslessComponent()
     154    const int lower = residual & ~(quantization - 1);
     155    const int upper = lower + quantization;
     165    return lower + (quantization >> 1);
     174    return lower + (quantization >> 1);
     189  int quantization;                         // in NearLossless()
     196  quantization = max_quantization;
     197  while (quantization >= max_diff) {
     198    quantization >>= 1;
     204  a = NearLosslessComponent(value >> 24, predict >> 24, 0xff, quantization);
     [all …]
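The WebP near-lossless path quantizes prediction residuals to a power-of-two grid: the residual is snapped into a bucket of width `quantization` and replaced by the bucket midpoint, and the bucket width is halved until it drops below the largest local pixel difference. A minimal sketch of that bucket arithmetic, derived from the lines above (the boundary handling of the real code is omitted):

```cpp
#include <cassert>

// Sketch: snap a residual to the midpoint of its power-of-two bucket, as in
// NearLosslessComponent(). `quantization` must be a power of two.
int QuantizeResidual(int residual, int quantization) {
  assert((quantization & (quantization - 1)) == 0);
  const int lower = residual & ~(quantization - 1);  // bucket floor
  return lower + (quantization >> 1);                // bucket midpoint
}

// Sketch: shrink the bucket width until it is below the largest local pixel
// difference, as in NearLossless(), so detailed regions are quantized more
// finely than smooth ones.
int ChooseQuantization(int max_quantization, int max_diff) {
  int quantization = max_quantization;
  while (quantization >= max_diff) quantization >>= 1;
  return quantization;
}
```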
/external/libjpeg-turbo/
D | wizard.txt |
      21  jcparam.c. At very low quality settings, some quantization table entries
      22  can get scaled up to values exceeding 255. Although 2-byte quantization
      26  quantization values to no more than 255 by giving the -baseline switch.
      30  You can substitute a different set of quantization values by using the
      33    -qtables file  Use the quantization tables given in the named file.
      35  The specified file should be a text file containing decimal quantization
      44  duplicates the default quantization tables:
      72  the quantization values are constrained to the range 1-255.
      74  By default, cjpeg will use quantization table 0 for luminance components and
      78    -qslots N[,...]  Select which quantization table to use for
      [all …]
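The quality-to-scaling rule lives in jcparam.c: a quality of 1-100 becomes a percentage scale factor applied to the base tables, which is why very low qualities can push scaled entries past 255. A minimal sketch of that mapping and of the clamping described above (mirrors the standard libjpeg formulas; treat it as illustrative rather than the library's code):

```cpp
#include <algorithm>

// Sketch of libjpeg's quality-to-scale-factor mapping (jcparam.c): quality 50
// means 100% (use the base tables as-is); lower qualities scale entries up,
// higher qualities scale them down.
int QualityScaling(int quality) {
  quality = std::min(100, std::max(1, quality));
  return (quality < 50) ? (5000 / quality) : (200 - quality * 2);
}

// Sketch of scaling one base table entry. Entries are clamped to 1..32767
// (2-byte quantization values); with -baseline they are further clamped to
// 255 so the output stays baseline-JPEG compatible.
int ScaleQuantEntry(int base_value, int scale_percent, bool force_baseline) {
  long v = (static_cast<long>(base_value) * scale_percent + 50) / 100;
  v = std::min(32767L, std::max(1L, v));
  if (force_baseline && v > 255) v = 255;
  return static_cast<int>(v);
}
```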
D | usage.txt |
      73    -quality N[,...]  Scale quantization tables to adjust image quality.
     113  -quality 100 will generate a quantization table of all 1's, minimizing loss
     114  in the quantization step (but there is still information loss in subsampling,
     124  quantization tables, which are considered optional in the JPEG standard.
     131  separate settings for every quantization table slot.) The principle is the
     143  quantization table slots. If there are more q-table slots than parameters,
     147  customized) quantization tables can be set with the -qtables option and
     236    -baseline  Force baseline-compatible quantization tables to be
     237               generated. This clamps quantization values to 8 bits
     243    -qtables file  Use the quantization tables given in the specified
     [all …]
/external/tensorflow/tensorflow/contrib/lite/toco/
D | toco_flags.proto |
      48  // quantization of input arrays, separately from other arrays.
      60  // the uint8 values are interpreted as real numbers, and the quantization
      66  // the representation (quantization) of real numbers in the output file,
      74  // file to choose a different real-numbers representation (quantization)
      91  // with quantization of models. Normally, quantization requires the input
      99  // allowing for quantization to proceed.
     109  // generate plain float code without fake-quantization from a quantized
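Quantization in toco needs each array to carry a (min, max) range; the "dummy quantization" flags mentioned here supply a default range where the model has none. From a range, the 8-bit quantization parameters follow mechanically. A minimal sketch of that derivation (standard affine-quantization math, not toco's exact code, which additionally nudges the range):

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>

struct QuantizationParams {
  double scale = 0.0;
  std::int32_t zero_point = 0;
};

// Sketch: derive uint8 quantization parameters from a real-valued
// [rmin, rmax] range, so that real = scale * (quantized - zero_point)
// and 0.0 is exactly representable.
QuantizationParams ParamsFromMinMax(double rmin, double rmax) {
  rmin = std::min(rmin, 0.0);  // the range must contain 0
  rmax = std::max(rmax, 0.0);
  QuantizationParams params;
  params.scale = (rmax - rmin) / 255.0;
  const double zp = std::round(-rmin / params.scale);
  params.zero_point =
      static_cast<std::int32_t>(std::min(255.0, std::max(0.0, zp)));
  return params;
}
```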
/external/tensorflow/tensorflow/contrib/lite/toco/g3doc/
D | cmdline_reference.md |
      52  control their quantization or dequantization, effectively switching
      62  * Some transformation flags allow one to carry on with quantization when the
     131  the (de-)quantization parameters of the input array, when it is quantized.
     143  performed by the inference code; however, the quantization parameters of
     166  output file, that is, controls the representation (quantization) of real
     175  to choose a different real-numbers representation (quantization) from what
     191  allows one to control specifically the quantization of input arrays, separately
     204  are interpreted as real numbers, and the quantization parameters used for
     209  These flags enable what is called "dummy quantization". If defined, their
     212  allowing one to proceed with quantization of non-quantized or
     [all …]
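For input arrays, the command-line quantization parameters are given as a mean and a standard deviation, under the convention that a uint8 input value is interpreted as `real = (quantized - mean_value) / std_value`. A minimal sketch of that interpretation (the convention as documented; the function name is illustrative):

```cpp
#include <cstdint>

// Sketch: toco's input-array convention, where mean_value and std_value
// define how uint8 input values are interpreted as real numbers:
//   real_value = (quantized_value - mean_value) / std_value
float InterpretInputValue(std::uint8_t q, float mean_value, float std_value) {
  return (static_cast<float>(q) - mean_value) / std_value;
}
```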
/external/tensorflow/tensorflow/contrib/quantize/
D | README.md |
       2  model quantization of weights, biases and activations during both training and
       4  [fake quantization op]
      10  quantization operation during training in both the forward and backward passes.
      11  The fake quantization operator achieves this by modeling the quantizer as a pass
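Training with fake quantization needs the quantizer to act as a pass-through (straight-through) estimator in the backward pass: the forward pass rounds, but the gradient is passed through unchanged wherever the input lies inside the clamping range. A minimal sketch of that gradient rule (conceptual, matching the fake-quant gradient semantics rather than the TensorFlow kernel itself):

```cpp
// Sketch of the straight-through estimator used by fake quantization: the
// rounding step has zero derivative almost everywhere, so the backward pass
// instead passes the incoming gradient through unchanged for inputs inside
// [min, max] and blocks it outside.
float FakeQuantGrad(float upstream_grad, float x, float min, float max) {
  return (x >= min && x <= max) ? upstream_grad : 0.0f;
}
```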
/external/tensorflow/tensorflow/contrib/lite/
D | model.cc | in ParseTensors():
     638  TfLiteQuantizationParams quantization;
     639  quantization.scale = 0;
     640  quantization.zero_point = 0;
     641  auto* q_params = tensor->quantization();
     647  if (q_params->scale()) quantization.scale = q_params->scale()->Get(0);
     649    quantization.zero_point = q_params->zero_point()->Get(0);
     706  i, type, get_name(tensor), dims, quantization, buffer_ptr,
     714  i, type, get_name(tensor), dims, quantization) != kTfLiteOk) {
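Once parsed, `TfLiteQuantizationParams` carries just a scale and a zero point, interpreted with the usual affine equation. A minimal sketch of using it to dequantize one uint8 tensor element (the struct fields match the TFLite C API; the helper function is illustrative):

```cpp
#include <cstdint>

// TfLiteQuantizationParams as in the TFLite C API: a per-tensor scale and
// zero point interpreted as real = scale * (quantized - zero_point).
struct TfLiteQuantizationParams {
  float scale;
  std::int32_t zero_point;
};

// Sketch: dequantize one uint8 element using the parsed parameters.
float DequantizeElement(std::uint8_t q, const TfLiteQuantizationParams& params) {
  return params.scale * (static_cast<std::int32_t>(q) - params.zero_point);
}
```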
D | context.c | in TfLiteTensorReset():
      68  TfLiteQuantizationParams quantization, char* buffer,
      75  tensor->params = quantization;
/external/ImageMagick/config/
D | Makefile.am |
      42  config/quantization-table.xml \
      66  config/quantization-table.xml \
/external/glide/third_party/gif_encoder/
D | LICENSE |
      13  NEUQUANT Neural-Net quantization algorithm by Anthony Dekker, 1994. See
      14  "Kohonen neural networks for optimal colour quantization" in "Network:
/external/tensorflow/tensorflow/docs_src/api_guides/python/
D | array_ops.md |
      79  ## Fake quantization
      80  Operations used to help train for better quantization accuracy.
/external/tensorflow/tensorflow/docs_src/performance/
D | index.md |
      41  * @{$quantization$How to Quantize Neural Networks with TensorFlow}, which
      42    explains how to use quantization to reduce model size, both in storage