
Searched refs:Quantization (Results 1 – 21 of 21) sorted by relevance

/external/tensorflow/tensorflow/lite/g3doc/performance/
model_optimization.md:38 computation. Quantization provides several benefits:
41 * Quantization of activations reduces memory access costs for reading and storing intermediate acti…
47 * [Quantization-aware training](https://github.com/tensorflow/tensorflow/tree/r1.13/tensorflow/cont…
61 <th>Top-1 Accuracy (Quantization Aware Training) </th>
64 <th>Latency (Quantization Aware Training) (ms) </th>
89 Note: Quantization-aware training supports a subset of convolutional neural network architectures.
best_practices.md:49 ### Quantization subsection
/external/tensorflow/tensorflow/contrib/quantize/
README.md:1 # Quantization-aware training
3 Quantization-aware model training ensures that the forward pass matches precision
7 * Quantization effects at inference are modeled at training time.
157 [1] B. Jacob et al., "Quantization and Training of Neural Networks for Efficient
/external/tensorflow/tensorflow/core/api_def/base_api/
api_def_FakeQuantWithMinMaxArgs.pbtxt:11 Quantization is called fake since the output is still in floating point.
api_def_FakeQuantWithMinMaxVarsGradient.pbtxt:13 min, max: Quantization interval, scalar floats.
api_def_FakeQuantWithMinMaxVarsPerChannelGradient.pbtxt:15 min, max: Quantization interval, floats of shape `[d]`.
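The FakeQuant op definitions above note that the quantization is "fake" because the output stays in floating point: values are snapped to the grid defined by the `[min, max]` interval, then mapped back to floats. A minimal NumPy sketch of that round trip (the function name and signature are illustrative, not TensorFlow's API):

```python
import numpy as np

def fake_quant_with_min_max(x, min_val, max_val, num_bits=8):
    """Clamp x to [min_val, max_val], round to the num_bits grid,
    then dequantize, so the result is still floating point."""
    levels = 2 ** num_bits - 1
    scale = (max_val - min_val) / levels
    clamped = np.clip(x, min_val, max_val)
    # Round to the nearest quantization level.
    quantized = np.round((clamped - min_val) / scale)
    # Dequantize: output remains float ("fake" quantization).
    return quantized * scale + min_val
```

The per-channel variant referenced above applies the same map with a separate `[min, max]` pair for each of the `d` channels.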
/external/tensorflow/tensorflow/lite/c/
c_api_internal_test.cc:91 TEST(Quantization, TestQuantizationFree) { in TEST() argument
/external/libjpeg-turbo/
wizard.txt:15 Quantization Table Adjustment
41 Quantization table files are free format, in that arbitrary whitespace can
46 # Quantization tables given in Annex K (Clause K.1) of
libjpeg.txt:1149 Quantization table number for component. The default value is
/external/tensorflow/tensorflow/lite/experimental/objc/tests/
TFLInterpreterTests.m:40 /** Quantization scale of the quantized model. */
43 /** Quantization zero point of the quantized model. */
/external/libtextclassifier/lang_id/common/flatbuffers/
embedding-network.fbs:58 // Quantization factors (float16), one per matrix row. There is no float16
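The FlatBuffers schema comment above describes per-row quantization: each matrix row gets its own scale factor stored at float16 precision. A sketch of that layout, assuming symmetric int8 quantization for the row entries (function names are hypothetical, not libtextclassifier's API):

```python
import numpy as np

def quantize_per_row(matrix):
    """Return int8 entries plus one float16 scale factor per row."""
    max_abs = np.abs(matrix).max(axis=1, keepdims=True)
    scales = (max_abs / 127.0).astype(np.float16)
    scales = np.maximum(scales, np.float16(1e-4))  # guard all-zero rows
    q = np.round(matrix / scales.astype(np.float32))
    # Clip in case float16 rounding of the scale pushed a value past 127.
    return np.clip(q, -127, 127).astype(np.int8), scales

def dequantize_per_row(q, scales):
    # Each entry times its row's scale factor.
    return q.astype(np.float32) * scales.astype(np.float32)
```

Storing the scales as float16 halves their footprint at a negligible accuracy cost, since the rounding error of the int8 entries dominates.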
/external/tensorflow/tensorflow/stream_executor/
stream.h:79 struct Quantization;
688 gpu_unquantized_src, Quantization<ElementType>::kModeId, in ThenMemcpyD2HQuantized()
705 Quantization<ElementType>::kModeId, gpu_unquantized_dst); in ThenMemcpyH2DQuantized()
2049 struct Quantization<uint8> {
2055 struct Quantization<uint16> {
2061 struct Quantization<int32> {
/external/tensorflow/tensorflow/lite/g3doc/guide/
faq.md:103 [Quantization-aware training](https://github.com/tensorflow/tensorflow/tree/r1.13/tensorflow/contri…
get_started.md:233 ### Quantization subsection
/external/tensorflow/tensorflow/lite/g3doc/r2/convert/
python_api.md:166 ### Quantization-aware training
/external/pdfium/third_party/libopenjpeg20/
0003-dwt-decode.patch:40 Explicit calculation of the Quantization Stepsizes
/external/tensorflow/tensorflow/lite/tutorials/
post_training_quant.ipynb:10 "# Post Training Quantization"
/external/gemmlowp/doc/
quantization.md:33 ## Quantization as an affine map.
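The gemmlowp doc referenced above treats quantization as an affine map, `real_value = scale * (quantized_value - zero_point)`, which is also the scheme behind the quantization scale and zero point mentioned in the TFLite tests earlier in these results. A minimal sketch under that convention (helper names are illustrative):

```python
import numpy as np

def affine_quantize(real, scale, zero_point):
    """Map real values to uint8 via q = round(r / scale) + zero_point."""
    q = np.round(real / scale) + zero_point
    return np.clip(q, 0, 255).astype(np.uint8)

def affine_dequantize(q, scale, zero_point):
    # The inverse affine map: real = scale * (q - zero_point).
    return scale * (q.astype(np.float32) - zero_point)
```

With `scale = 0.1` and `zero_point = 128`, the real value 0.0 maps exactly to the quantized value 128; having 0.0 exactly representable is what the zero point buys you.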
/external/tensorflow/tensorflow/lite/g3doc/convert/
cmdline_examples.md:92 ## Quantization section in Converter command line examples
/external/tensorflow/tensorflow/python/
BUILD:5577 # Quantization
/external/tensorflow/tensorflow/core/kernels/
BUILD:6025 # Quantization-specific OpKernels