Searched refs: QuantizedTensorToFloat (Results 1 – 15 of 15) sorted by relevance

/external/tensorflow/tensorflow/core/kernels/
quantized_activation_ops_test.cc
    65  QuantizedTensorToFloat<quint8>(output_quantized, output_min, output_max);  in TEST_F()
    96  QuantizedTensorToFloat<quint8>(output_quantized, output_min, output_max);  in TEST_F()
quantized_pooling_ops_test.cc
    79  QuantizedTensorToFloat<quint8>(output_quantized, output_min, output_max);  in TEST_F()
   124  QuantizedTensorToFloat<quint8>(output_quantized, output_min, output_max);  in TEST_F()
quantized_bias_add_op_test.cc
    86  QuantizedTensorToFloat<qint32>(output_quantized, output_min, output_max);  in TEST_F()
   168  QuantizedTensorToFloat<qint32>(output_quantized, output_min, output_max);  in TEST_F()
quantized_concat_op_test.cc
   114  QuantizedTensorToFloat<quint8>(output_quantized, output_min, output_max);  in TestSmall8Bit()
   180  QuantizedTensorToFloat<qint32>(output_quantized, output_min, output_max);  in TestSmall32Bit()
   242  QuantizedTensorToFloat<quint8>(output_quantized, output_min, output_max);  in TestSecondDim8Bit()
quantized_batch_norm_op_test.cc
   129  QuantizedTensorToFloat<qint32>(output_quantized, output_min, output_max);  in TEST_F()
   236  QuantizedTensorToFloat<qint32>(output_quantized, output_min, output_max);  in TEST_F()
quantized_conv_ops_test.cc
   130  QuantizedTensorToFloat<qint32>(output_quantized, output_min, output_max);  in TEST_F()
   321  QuantizedTensorToFloat<qint32>(output_quantized, output_min, output_max);  in TEST_F()
quantized_matmul_op_test.cc
   354  QuantizedTensorToFloat<qint32>(output_quantized, output_min, output_max);  in TEST_F()
quantized_mul_op_test.cc
    81  Tensor z_float = QuantizedTensorToFloat<qint32>(z_quantized, z_min, z_max);  in TestMul()
quantized_add_op_test.cc
    81  Tensor z_float = QuantizedTensorToFloat<qint32>(z_quantized, z_min, z_max);  in TestAdd()
quantization_utils_test.cc
   685  Tensor output = QuantizedTensorToFloat<quint8>(input, input_min, input_max);  in TestQuantizedTensorToFloat()
   706  Tensor output32 = QuantizedTensorToFloat<qint32>(  in TestQuantizedTensorToFloat()
quantization_utils.h
   799  Tensor QuantizedTensorToFloat(const Tensor& input, float min, float max) {  in QuantizedTensorToFloat() function
/external/tensorflow/tensorflow/core/kernels/mkl/
mkl_quantized_pooling_ops_test.cc
   128  QuantizedTensorToFloat<quint8>(output_quantized, output_min, output_max);  in TEST_F()
   196  QuantizedTensorToFloat<quint8>(output_quantized, output_min, output_max);  in TEST_F()
mkl_quantized_concat_op_test.cc
   157  QuantizedTensorToFloat<quint8>(output_quantized, output_min, output_max);  in TestSmall8Bit()
   231  QuantizedTensorToFloat<quint8>(output_quantized, output_min, output_max);  in TestSecondDim8Bit()
mkl_quantized_conv_ops_perchannel_test.cc
   177  QuantizedTensorToFloat<qint32>(output_quantized, output_min, output_max);  in TEST_F()
mkl_quantized_conv_ops_test.cc
   283  QuantizedTensorToFloat<qint32>(output_quantized, output_min, output_max);  in TEST_F()
   387  QuantizedTensorToFloat<qint32>(output_quantized, output_min, output_max);  in TEST_F()
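
Every hit above follows the same pattern: a test quantizes float inputs, runs a quantized kernel, and then calls QuantizedTensorToFloat<T>(tensor, min, max) (declared in quantization_utils.h, line 799 above) to map the quint8/qint32 output back into float space so it can be compared against a float reference. Below is a minimal round-trip sketch of that usage, not code taken from any file listed here: the test name, sample values, and tolerance are illustrative, and it assumes the FloatTensorToQuantized<T> counterpart and the test::AsTensor / test::ExpectTensorNear helpers from the TensorFlow tree are available.

// A minimal sketch, assuming the headers below: round-trip a float
// tensor through quint8 and back with QuantizedTensorToFloat<T>().
#include "tensorflow/core/framework/tensor.h"
#include "tensorflow/core/framework/tensor_testutil.h"
#include "tensorflow/core/kernels/quantization_utils.h"
#include "tensorflow/core/platform/test.h"

namespace tensorflow {

TEST(QuantizationUtilsSketch, QuantizedTensorToFloatRoundTrip) {
  // Float values to push through the 8-bit quantized representation.
  Tensor input_float =
      test::AsTensor<float>({-1.0f, 0.0f, 1.0f, 2.0f}, TensorShape({4}));
  const float input_min = -1.0f;  // range that quint8 [0, 255] maps onto
  const float input_max = 2.0f;

  // Quantize, then convert back with the function every hit above calls.
  Tensor input_quantized =
      FloatTensorToQuantized<quint8>(input_float, input_min, input_max);
  Tensor output_float =
      QuantizedTensorToFloat<quint8>(input_quantized, input_min, input_max);

  // 8-bit quantization is lossy, so compare with a loose tolerance.
  test::ExpectTensorNear<float>(input_float, output_float, 0.05);
}

}  // namespace tensorflow

The loose 0.05 tolerance reflects that quantizing a 3.0-wide range into 255 steps can shift each value by up to about half a step (~0.006), so exact equality is not expected; the listed kernel tests compare against expected outputs in the same way.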