/external/tensorflow/tensorflow/lite/g3doc/performance/ |
D | model_optimization.md |
    38 computation. Quantization provides several benefits:
    41 * Quantization of activations reduces memory access costs for reading and storing intermediate acti…
    47 * [Quantization-aware training](https://github.com/tensorflow/tensorflow/tree/r1.13/tensorflow/cont…
    61 <th>Top-1 Accuracy (Quantization Aware Training) </th>
    64 <th>Latency (Quantization Aware Training) (ms) </th>
    89 Note: Quantization-aware training supports a subset of convolutional neural network architectures.
|
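The model_optimization.md hits above list the benefits of quantizing weights and activations. As a minimal sketch of how weight quantization is typically enabled for TensorFlow Lite (assuming the newer tf.lite.Optimize converter API rather than the r1.13 tree indexed here, and a hypothetical SavedModel path):

```python
import tensorflow as tf

# Hypothetical path to an existing SavedModel; adjust for your model.
saved_model_dir = "/tmp/my_saved_model"

# Load the model into the TFLite converter.
converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)

# Enable the default optimizations, which quantize weights to 8 bits
# (dynamic-range quantization) and shrink the serialized model.
converter.optimizations = [tf.lite.Optimize.DEFAULT]

tflite_model = converter.convert()
with open("model_quantized.tflite", "wb") as f:
    f.write(tflite_model)
```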
D | best_practices.md | 49 ### Quantization subsection
|
/external/tensorflow/tensorflow/contrib/quantize/ |
D | README.md |
    1 # Quantization-aware training
    3 Quantization-aware model training ensures that the forward pass matches precision
    7 * Quantization effects at inference are modeled at training time.
    157 [1] B.Jacob et al., "Quantization and Training of Neural Networks for Efficient
|
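The contrib/quantize README indexed above describes quantization-aware training: fake-quantization ops are inserted into the graph so that training sees the same rounding that inference will apply. A minimal sketch, assuming the TF 1.x tf.contrib.quantize graph rewriter and user-defined build_model()/build_loss() helpers (not part of the library):

```python
import tensorflow as tf

train_graph = tf.Graph()
with train_graph.as_default():
    logits = build_model()       # user-defined model; assumption
    loss = build_loss(logits)    # user-defined loss; assumption

    # Rewrite the training graph in place, inserting fake-quantization ops.
    # quant_delay postpones quantized training until the weights have settled.
    tf.contrib.quantize.create_training_graph(input_graph=train_graph,
                                              quant_delay=2000000)

    train_op = tf.train.GradientDescentOptimizer(0.01).minimize(loss)

# For export, build a separate inference graph, rewrite it with the eval
# variant, then freeze it and hand it to the TFLite converter.
eval_graph = tf.Graph()
with eval_graph.as_default():
    logits = build_model()
    tf.contrib.quantize.create_eval_graph(input_graph=eval_graph)
```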
/external/tensorflow/tensorflow/core/api_def/base_api/ |
D | api_def_FakeQuantWithMinMaxArgs.pbtxt | 11 Quantization is called fake since the output is still in floating point.
|
D | api_def_FakeQuantWithMinMaxVarsGradient.pbtxt | 13 min, max: Quantization interval, scalar floats.
|
D | api_def_FakeQuantWithMinMaxVarsPerChannelGradient.pbtxt | 15 min, max: Quantization interval, floats of shape `[d]`.
|
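The FakeQuant op definitions indexed above describe "fake" quantization: values are rounded onto the grid defined by a [min, max] quantization interval but the output stays in floating point. A minimal sketch of the op as exposed in Python (assuming a TF 1.x graph-mode session and the tf.quantization namespace):

```python
import tensorflow as tf

x = tf.constant([-8.0, -1.5, 0.0, 0.3, 2.7, 10.0])

# Round onto an 8-bit grid over the interval [-6, 6]; values outside the
# interval are clamped.  The result is still float32, hence "fake" quantization.
y = tf.quantization.fake_quant_with_min_max_args(
    x, min=-6.0, max=6.0, num_bits=8)

with tf.Session() as sess:
    print(sess.run(y))
```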
/external/tensorflow/tensorflow/lite/c/ |
D | c_api_internal_test.cc | 91 TEST(Quantization, TestQuantizationFree) { in TEST() argument
|
/external/libjpeg-turbo/ |
D | wizard.txt |
    15 Quantization Table Adjustment
    41 Quantization table files are free format, in that arbitrary whitespace can
    46 # Quantization tables given in Annex K (Clause K.1) of
|
D | libjpeg.txt | 1149 Quantization table number for component. The default value is
|
/external/tensorflow/tensorflow/lite/experimental/objc/tests/ |
D | TFLInterpreterTests.m |
    40 /** Quantization scale of the quantized model. */
    43 /** Quantization zero point of the quantized model. */
|
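The Objective-C interpreter test indexed above checks the quantization scale and zero point of a quantized model's tensors. The same parameters are exposed through the Python tf.lite.Interpreter; a minimal sketch, assuming a quantized .tflite file at a hypothetical path:

```python
import tensorflow as tf

# Hypothetical path to a quantized .tflite model.
interpreter = tf.lite.Interpreter(model_path="model_quantized.tflite")
interpreter.allocate_tensors()

input_detail = interpreter.get_input_details()[0]
scale, zero_point = input_detail["quantization"]

# real_value ≈ scale * (quantized_value - zero_point)
print("input scale:", scale, "zero point:", zero_point)
```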
/external/libtextclassifier/lang_id/common/flatbuffers/ |
D | embedding-network.fbs | 58 // Quantization factors (float16), one per matrix row. There is no float16
|
/external/tensorflow/tensorflow/stream_executor/ |
D | stream.h |
    79 struct Quantization;
    688 gpu_unquantized_src, Quantization<ElementType>::kModeId, in ThenMemcpyD2HQuantized()
    705 Quantization<ElementType>::kModeId, gpu_unquantized_dst); in ThenMemcpyH2DQuantized()
    2049 struct Quantization<uint8> {
    2055 struct Quantization<uint16> {
    2061 struct Quantization<int32> {
|
/external/tensorflow/tensorflow/lite/g3doc/guide/ |
D | faq.md | 103 [Quantization-aware training](https://github.com/tensorflow/tensorflow/tree/r1.13/tensorflow/contri…
|
D | get_started.md | 233 ### Quantization subsection
|
/external/tensorflow/tensorflow/lite/g3doc/r2/convert/ |
D | python_api.md | 166 ### Quantization-aware training
|
/external/pdfium/third_party/libopenjpeg20/ |
D | 0003-dwt-decode.patch | 40 Explicit calculation of the Quantization Stepsizes
|
/external/tensorflow/tensorflow/lite/tutorials/ |
D | post_training_quant.ipynb | 10 "# Post Training Quantization"
|
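The post_training_quant.ipynb tutorial indexed above covers quantizing an already-trained model. Beyond the weights-only conversion sketched earlier, activations can also be quantized by supplying calibration data; a minimal sketch, assuming the newer representative_dataset converter attribute (not present in the r1.13 tree indexed here) and a hypothetical input shape:

```python
import numpy as np
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model("/tmp/my_saved_model")
converter.optimizations = [tf.lite.Optimize.DEFAULT]

# Calibration data lets the converter choose ranges for activations,
# so both weights and activations can be stored as 8-bit integers.
def representative_dataset():
    for _ in range(100):
        # Hypothetical input shape; replace with real samples.
        yield [np.random.rand(1, 224, 224, 3).astype(np.float32)]

converter.representative_dataset = representative_dataset
tflite_model = converter.convert()
```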
/external/gemmlowp/doc/ |
D | quantization.md | 33 ## Quantization as an affine map.
|
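gemmlowp's quantization.md, hit above, treats quantization as an affine map between real values and 8-bit integers: real = scale * (quantized - zero_point). A minimal sketch of that map and its inverse (the scale/zero_point names follow the usual convention and are not copied from the doc):

```python
import numpy as np

def quantize(real_values, scale, zero_point):
    """Affine map: real = scale * (quantized - zero_point)."""
    q = np.round(real_values / scale) + zero_point
    return np.clip(q, 0, 255).astype(np.uint8)

def dequantize(quantized_values, scale, zero_point):
    return scale * (quantized_values.astype(np.float32) - zero_point)

x = np.array([-1.0, 0.0, 0.5, 1.0], dtype=np.float32)
scale, zero_point = 1.0 / 128.0, 128
q = quantize(x, scale, zero_point)
print(q, dequantize(q, scale, zero_point))  # round-trip, with clamping at 255
```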
/external/tensorflow/tensorflow/lite/g3doc/convert/ |
D | cmdline_examples.md | 92 ## Quantization section in Converter command line examples
|
/external/tensorflow/tensorflow/python/ |
D | BUILD | 5577 # Quantization
|
/external/tensorflow/tensorflow/core/kernels/ |
D | BUILD | 6025 # Quantization-specific OpKernels
|