
Searched refs:QuantizedTensorToFloat (Results 1 – 14 of 14) sorted by relevance

/external/tensorflow/tensorflow/core/kernels/
quantized_activation_ops_test.cc 65 QuantizedTensorToFloat<quint8>(output_quantized, output_min, output_max); in TEST_F()
96 QuantizedTensorToFloat<quint8>(output_quantized, output_min, output_max); in TEST_F()
quantized_pooling_ops_test.cc 79 QuantizedTensorToFloat<quint8>(output_quantized, output_min, output_max); in TEST_F()
124 QuantizedTensorToFloat<quint8>(output_quantized, output_min, output_max); in TEST_F()
quantized_bias_add_op_test.cc 86 QuantizedTensorToFloat<qint32>(output_quantized, output_min, output_max); in TEST_F()
168 QuantizedTensorToFloat<qint32>(output_quantized, output_min, output_max); in TEST_F()
quantized_concat_op_test.cc 114 QuantizedTensorToFloat<quint8>(output_quantized, output_min, output_max); in TestSmall8Bit()
180 QuantizedTensorToFloat<qint32>(output_quantized, output_min, output_max); in TestSmall32Bit()
242 QuantizedTensorToFloat<quint8>(output_quantized, output_min, output_max); in TestSecondDim8Bit()
mkl_quantized_pooling_ops_test.cc 128 QuantizedTensorToFloat<quint8>(output_quantized, output_min, output_max); in TEST_F()
196 QuantizedTensorToFloat<quint8>(output_quantized, output_min, output_max); in TEST_F()
mkl_quantized_concat_op_test.cc 153 QuantizedTensorToFloat<quint8>(output_quantized, output_min, output_max); in TestSmall8Bit()
227 QuantizedTensorToFloat<quint8>(output_quantized, output_min, output_max); in TestSecondDim8Bit()
quantized_conv_ops_test.cc 130 QuantizedTensorToFloat<qint32>(output_quantized, output_min, output_max); in TEST_F()
321 QuantizedTensorToFloat<qint32>(output_quantized, output_min, output_max); in TEST_F()
quantized_batch_norm_op_test.cc 130 QuantizedTensorToFloat<qint32>(output_quantized, output_min, output_max); in TEST_F()
238 QuantizedTensorToFloat<qint32>(output_quantized, output_min, output_max); in TEST_F()
quantized_matmul_op_test.cc 354 QuantizedTensorToFloat<qint32>(output_quantized, output_min, output_max); in TEST_F()
quantized_mul_op_test.cc 81 Tensor z_float = QuantizedTensorToFloat<qint32>(z_quantized, z_min, z_max); in TestMul()
quantized_add_op_test.cc 81 Tensor z_float = QuantizedTensorToFloat<qint32>(z_quantized, z_min, z_max); in TestAdd()
mkl_quantized_conv_ops_test.cc 201 QuantizedTensorToFloat<qint32>(output_quantized, output_min, output_max); in TEST_F()
quantization_utils_test.cc 685 Tensor output = QuantizedTensorToFloat<quint8>(input, input_min, input_max); in TestQuantizedTensorToFloat()
706 Tensor output32 = QuantizedTensorToFloat<qint32>( in TestQuantizedTensorToFloat()
quantization_utils.h 790 Tensor QuantizedTensorToFloat(const Tensor& input, float min, float max) { in QuantizedTensorToFloat() function
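
Every hit above follows the same pattern: a test converts an op's quantized output back to floats with QuantizedTensorToFloat<T>(tensor, min, max) before comparing it against a reference tensor, and the final hit is the template's definition in quantization_utils.h. The standalone sketch below illustrates the affine min/max dequantization such a call performs, assuming the usual linear mapping over the quantized type's range; DequantizeValue and the sample values are illustrative stand-ins, not TensorFlow's API.

#include <cstdint>
#include <iostream>
#include <limits>
#include <vector>

// Illustrative affine dequantization of one 8-bit value back into [min, max].
// TensorFlow's QuantizedTensorToFloat<T> applies a mapping of this kind
// element-wise over a Tensor; this helper is a simplified stand-in.
float DequantizeValue(uint8_t q, float min, float max) {
  const float lowest = std::numeric_limits<uint8_t>::lowest();  // 0
  const float highest = std::numeric_limits<uint8_t>::max();    // 255
  const float range_scale = (max - min) / (highest - lowest);
  return min + (static_cast<float>(q) - lowest) * range_scale;
}

int main() {
  // Mirror the test pattern: a quantized buffer plus its float range
  // (output_min, output_max) is mapped back to floats so the result can be
  // compared against an expected float tensor.
  const float output_min = -1.0f, output_max = 1.0f;
  const std::vector<uint8_t> output_quantized = {0, 64, 128, 192, 255};
  for (uint8_t q : output_quantized) {
    std::cout << DequantizeValue(q, output_min, output_max) << "\n";
  }
  return 0;
}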