/external/llvm-project/mlir/include/mlir/Dialect/Quant/
  QuantOpsBase.td
    1:   //===- QuantOpsBase.td - Quantization dialect base ---------*- tablegen -*-===//
    9:   // Predicates for types in the Quantization dialect.
    24:  // Quantization type definitions

  QuantOps.td
    1:    //===- QuantOps.td - Quantization operation definition -----*- tablegen -*-===//
    9:    // This is the operation definition file for Quantization.
    27:   // Quantization casts
    139:  // Quantization range starts from 0 or 1; starts from 1 if true.
    172:  // Quantization range starts from 0 or 1; starts from 1 if true.

/external/llvm-project/mlir/include/mlir/Dialect/Tosa/IR/
  TosaOpBase.td
    45:   // TOSA Operator Quantization Attributes.
    48:   // Quantization attributes used across TOSA operators. Quantization attributes
    53:   // quantization support: https://mlir.llvm.org/docs/Quantization/, and supports
    108:  // TOSA Operator Quantization Builders.

/external/tensorflow/tensorflow/lite/g3doc/performance/
  model_optimization.md
    35:   Quantization can reduce the size of a model in all of these cases, potentially
    81:   ### Quantization subsection
    83:   [Quantization](https://www.tensorflow.org/model_optimization/guide/quantization/post_training)
    95:   [Quantization-aware training](http://www.tensorflow.org/model_optimization/guide/quantization/train…
    108:  <th>Top-1 Accuracy (Quantization Aware Training) </th>
    111:  <th>Latency (Quantization Aware Training) (ms) </th>
    130:  [Quantization with int16 activations](https://www.tensorflow.org/model_optimization/guide/quantizat…

  nnapi.md
    121:  ### Quantization subsection
    123:  Quantization reduces model size by using 8-bit integers or 16-bit floats instead
    125:  32-bit float versions; 16-bit floats are half of the size. Quantization can

  delegates.md
    95:   [Quantization-aware training](http://www.tensorflow.org/model_optimization/guide/quantization/train…

  gpu_advanced.md
    305:  [Quantization-aware training](https://www.tensorflow.org/lite/convert/quantization)

/external/tensorflow/tensorflow/core/api_def/base_api/
  api_def_FakeQuantWithMinMaxVarsGradient.pbtxt
    13:  min, max: Quantization interval, scalar floats.

  api_def_FakeQuantWithMinMaxVarsPerChannelGradient.pbtxt
    15:  min, max: Quantization interval, floats of shape `[d]`.

  api_def_FakeQuantWithMinMaxArgs.pbtxt
    24:  Quantization is called fake since the output is still in floating point.

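The FakeQuantWithMinMaxArgs entry above notes that "Quantization is called fake since the output is still in floating point": the value is snapped to one of a fixed number of levels inside `[min, max]`, but the result stays a float. A minimal sketch of that idea; the function name and the round-to-nearest-level scheme are illustrative, not TensorFlow's actual implementation:

```python
def fake_quant(x, min_val, max_val, num_bits=8):
    """Quantize then dequantize a float. The output is still floating
    point, which is why this family of ops is called "fake" quantization."""
    levels = 2 ** num_bits - 1            # 255 steps for 8 bits
    scale = (max_val - min_val) / levels
    x = min(max(x, min_val), max_val)     # clamp into the [min, max] interval
    q = round((x - min_val) / scale)      # nearest integer level in 0..levels
    return min_val + q * scale            # map back to a float
```

The result differs from the input by at most half a quantization step, which is why narrowing `[min, max]` to the actual range of the data reduces quantization error.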
/external/tensorflow/tensorflow/lite/experimental/quantization_debugger/
  README.md
    1:   # TensorFlow Lite Quantization Debugger
    14:  This is now feasible using the TensorFlow Lite Quantization Debugger, as shown

/external/tensorflow/tensorflow/lite/c/
  common_test.cc
    99:  TEST(Quantization, TestQuantizationFree) { in TEST() argument

/external/libjpeg-turbo/
  wizard.txt
    15:  Quantization Table Adjustment
    41:  Quantization table files are free format, in that arbitrary whitespace can
    46:  # Quantization tables given in Annex K (Clause K.1) of

/external/tensorflow/tensorflow/lite/objc/tests/
  TFLInterpreterTests.m
    39:  /** Quantization scale of the quantized model. */
    42:  /** Quantization zero point of the quantized model. */

/external/llvm-project/mlir/docs/
  Quantization.md
    1:    # Quantization chapter
    170:  * *Quantization* dialect containing:
    195:  ## Quantization Dialect

/external/tensorflow/tensorflow/lite/g3doc/guide/
  roadmap.md
    74:  * **Quantization**

  faq.md
    126:  [Quantization-aware training](https://github.com/tensorflow/tensorflow/tree/r1.13/tensorflow/contri…

  get_started.md
    263:  ### Quantization subsection

/external/libtextclassifier/native/lang_id/common/flatbuffers/
  embedding-network.fbs
    58:  // Quantization factors (float16), one per matrix row. There is no float16

/external/tensorflow/tensorflow/stream_executor/
  stream.h
    93:    struct Quantization;
    711:   gpu_unquantized_src, Quantization<ElementType>::kModeId, in ThenMemcpyD2HQuantized()
    728:   Quantization<ElementType>::kModeId, gpu_unquantized_dst); in ThenMemcpyH2DQuantized()
    2155:  struct Quantization<uint8> {
    2161:  struct Quantization<uint16> {
    2167:  struct Quantization<int32> {

/external/llvm-project/mlir/docs/Dialects/
  TOSA.md
    126:  ### Quantization Parameters in Ops vs Tensors

/external/tensorflow/tensorflow/lite/g3doc/inference_with_metadata/
  lite_support.md
    216:  ## Quantization section in Process input and output data with the TensorFlow Lite Support Library

/external/pdfium/third_party/libopenjpeg20/
  0003-dwt-decode.patch
    39:  Explicit calculation of the Quantization Stepsizes

/external/tensorflow/tensorflow/lite/java/ovic/
  README.md
    245:  [Post-Training Quantization tutorial](https://www.tensorflow.org/lite/performance/post_training_qua…

/external/gemmlowp/doc/
  quantization.md
    33:  ## Quantization as an affine map.
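The gemmlowp heading above refers to modeling quantization as an affine map between real values and integers, `real = scale * (q - zero_point)`, the same scale/zero-point scheme that appears in the TFLite interpreter tests listed earlier. A minimal Python sketch of that map; the uint8 range and function names are illustrative:

```python
QMIN, QMAX = 0, 255  # representable uint8 range

def quantize(real, scale, zero_point):
    """Forward affine map: q = zero_point + round(real / scale)."""
    q = zero_point + round(real / scale)
    return max(QMIN, min(QMAX, q))       # clamp to the representable range

def dequantize(q, scale, zero_point):
    """Inverse affine map: real = scale * (q - zero_point)."""
    return scale * (q - zero_point)
```

One consequence of this formulation is that the real value 0.0 is represented exactly (by `zero_point` itself), which matters for operations like zero-padding.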