
Searched refs:Quantization (Results 1 – 25 of 33) sorted by relevance

/external/llvm-project/mlir/include/mlir/Dialect/Quant/
QuantOpsBase.td:1 //===- QuantOpsBase.td - Quantization dialect base ---------*- tablegen -*-===//
9 // Predicates for types in the Quantization dialect.
24 // Quantization type definitions
QuantOps.td:1 //===- QuantOps.td - Quantization operation definition -----*- tablegen -*-===//
9 // This is the operation definition file for Quantization.
27 // Quantization casts
139 // Quantization range starts from 0 or 1; starts from 1 if true.
172 // Quantization range starts from 0 or 1; starts from 1 if true.
/external/llvm-project/mlir/include/mlir/Dialect/Tosa/IR/
TosaOpBase.td:45 // TOSA Operator Quantization Attributes.
48 // Quantization attributes used across TOSA operators. Quantization attributes
53 // quantization support: https://mlir.llvm.org/docs/Quantization/, and supports
108 // TOSA Operator Quantization Builders.
/external/tensorflow/tensorflow/lite/g3doc/performance/
model_optimization.md:35 Quantization can reduce the size of a model in all of these cases, potentially
81 ### Quantization subsection
83 [Quantization](https://www.tensorflow.org/model_optimization/guide/quantization/post_training)
95 [Quantization-aware training](http://www.tensorflow.org/model_optimization/guide/quantization/train…
108 <th>Top-1 Accuracy (Quantization Aware Training) </th>
111 <th>Latency (Quantization Aware Training) (ms) </th>
130 [Quantization with int16 activations](https://www.tensorflow.org/model_optimization/guide/quantizat…
nnapi.md:121 ### Quantization subsection
123 Quantization reduces model size by using 8-bit integers or 16-bit floats instead
125 32-bit float versions; 16-bit floats are half of the size. Quantization can
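The size arithmetic described in the nnapi.md hit above can be checked directly. A minimal NumPy sketch with an illustrative (made-up) tensor, using a simple asymmetric affine scheme for demonstration rather than TFLite's actual quantization spec:

```python
import numpy as np

# A hypothetical float32 weight tensor (values are illustrative).
weights = np.linspace(-1.0, 1.0, 1024, dtype=np.float32)

# 16-bit floats: half the size of float32.
half = weights.astype(np.float16)

# 8-bit affine quantization, real = scale * (q - zero_point): a quarter the size.
qmin, qmax = -128, 127
scale = (weights.max() - weights.min()) / (qmax - qmin)
zero_point = int(round(qmin - weights.min() / scale))
q = np.clip(np.round(weights / scale) + zero_point, qmin, qmax).astype(np.int8)

print(weights.nbytes)  # 4096 bytes
print(half.nbytes)     # 2048 bytes (2x smaller)
print(q.nbytes)        # 1024 bytes (4x smaller)
```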
delegates.md:95 [Quantization-aware training](http://www.tensorflow.org/model_optimization/guide/quantization/train…
gpu_advanced.md:305 [Quantization-aware training](https://www.tensorflow.org/lite/convert/quantization)
/external/tensorflow/tensorflow/core/api_def/base_api/
api_def_FakeQuantWithMinMaxVarsGradient.pbtxt:13 min, max: Quantization interval, scalar floats.
api_def_FakeQuantWithMinMaxVarsPerChannelGradient.pbtxt:15 min, max: Quantization interval, floats of shape `[d]`.
api_def_FakeQuantWithMinMaxArgs.pbtxt:24 Quantization is called fake since the output is still in floating point.
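The api_def_FakeQuantWithMinMaxArgs.pbtxt hit above explains why these ops are "fake": they quantize and immediately dequantize, so outputs stay in floating point. A simplified sketch of that round trip (the real TF kernel additionally nudges min/max so zero is exactly representable):

```python
import numpy as np

def fake_quant(x, min_val=-6.0, max_val=6.0, num_bits=8):
    """Quantize to an integer grid, then dequantize back to float."""
    levels = 2 ** num_bits - 1
    scale = (max_val - min_val) / levels
    x = np.clip(x, min_val, max_val)
    q = np.round((x - min_val) / scale)  # integer grid, still stored as float
    return q * scale + min_val           # back to real values

x = np.array([-7.0, -1.234, 0.0, 2.5, 8.0], dtype=np.float32)
y = fake_quant(x)
# y is still floating point, but every value now sits on the 256-level grid;
# out-of-range inputs (-7.0, 8.0) are clamped to [min_val, max_val].
print(y)
```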
/external/tensorflow/tensorflow/lite/experimental/quantization_debugger/
README.md:1 # TensorFlow Lite Quantization Debugger
14 This is now feasible using the TensorFlow Lite Quantization Debugger, as shown
/external/tensorflow/tensorflow/lite/c/
common_test.cc:99 TEST(Quantization, TestQuantizationFree) { in TEST() argument
/external/libjpeg-turbo/
wizard.txt:15 Quantization Table Adjustment
41 Quantization table files are free format, in that arbitrary whitespace can
46 # Quantization tables given in Annex K (Clause K.1) of
/external/tensorflow/tensorflow/lite/objc/tests/
TFLInterpreterTests.m:39 /** Quantization scale of the quantized model. */
42 /** Quantization zero point of the quantized model. */
/external/llvm-project/mlir/docs/
Quantization.md:1 # Quantization chapter
170 * *Quantization* dialect containing:
195 ## Quantization Dialect
/external/tensorflow/tensorflow/lite/g3doc/guide/
roadmap.md:74 * **Quantization**
faq.md:126 [Quantization-aware training](https://github.com/tensorflow/tensorflow/tree/r1.13/tensorflow/contri…
get_started.md:263 ### Quantization subsection
/external/libtextclassifier/native/lang_id/common/flatbuffers/
embedding-network.fbs:58 // Quantization factors (float16), one per matrix row. There is no float16
/external/tensorflow/tensorflow/stream_executor/
stream.h:93 struct Quantization;
711 gpu_unquantized_src, Quantization<ElementType>::kModeId, in ThenMemcpyD2HQuantized()
728 Quantization<ElementType>::kModeId, gpu_unquantized_dst); in ThenMemcpyH2DQuantized()
2155 struct Quantization<uint8> {
2161 struct Quantization<uint16> {
2167 struct Quantization<int32> {
/external/llvm-project/mlir/docs/Dialects/
TOSA.md:126 ### Quantization Parameters in Ops vs Tensors
/external/tensorflow/tensorflow/lite/g3doc/inference_with_metadata/
lite_support.md:216 ## Quantization section in Process input and output data with the TensorFlow Lite Support Library
/external/pdfium/third_party/libopenjpeg20/
0003-dwt-decode.patch:39 Explicit calculation of the Quantization Stepsizes
/external/tensorflow/tensorflow/lite/java/ovic/
README.md:245 [Post-Training Quantization tutorial](https://www.tensorflow.org/lite/performance/post_training_qua…
/external/gemmlowp/doc/
quantization.md:33 ## Quantization as an affine map.
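The gemmlowp doc's "quantization as an affine map" (and the scale/zero-point fields seen in TFLInterpreterTests.m above) boil down to `real = scale * (q - zero_point)`. A minimal sketch with illustrative, made-up parameters:

```python
scale = 0.05       # hypothetical per-tensor scale
zero_point = 10    # hypothetical zero point in the int8 domain

def dequantize(q):
    # The affine map: integer code -> real value.
    return scale * (q - zero_point)

def quantize(r):
    # Inverse map: round to the nearest code, clamp to the int8 range.
    q = round(r / scale) + zero_point
    return max(-128, min(127, q))

print(quantize(1.0))            # 30
print(dequantize(30))           # 1.0
print(dequantize(zero_point))   # 0.0: zero is exactly representable
```

Keeping the real value 0.0 exactly representable (at `q == zero_point`) is why the affine form is preferred over a pure scaling map for zero-padded ops.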
