Searched refs:per_tensor_quantized (Results 1 – 2 of 2) sorted by relevance
63    bool per_tensor_quantized = true) {                              in conv2d_nchw_core_generic()
138       (per_tensor_quantized ? bias_scale[0] : bias_scale[_oc]) *   in conv2d_nchw_core_generic()
191   bool per_tensor_quantized = bias_scale.numel() == 1;             in quantized_conv_out() [local]
221       per_tensor_quantized);                                       in quantized_conv_out()
250       per_tensor_quantized);                                       in quantized_conv_out()
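These hits show how the kernel distinguishes per-tensor from per-channel quantization: per_tensor_quantized is derived from bias_scale.numel() == 1, and the scale applied to output channel _oc is bias_scale[0] in the per-tensor case, bias_scale[_oc] otherwise. A minimal Python sketch of that selection follows; select_bias_scale is a hypothetical helper name used only for illustration, not part of the kernel.

    import torch

    def select_bias_scale(bias_scale: torch.Tensor, oc: int) -> float:
        # Mirrors the kernel's check: a single element means per-tensor quantization.
        per_tensor_quantized = bias_scale.numel() == 1
        # Per-tensor: one scale shared by every output channel;
        # per-channel: a distinct scale per output channel.
        return float(bias_scale[0] if per_tensor_quantized else bias_scale[oc])

    print(select_bias_scale(torch.tensor([0.02]), oc=3))                    # per-tensor  -> 0.02
    print(select_bias_scale(torch.tensor([0.01, 0.02, 0.03, 0.04]), oc=3))  # per-channel -> 0.04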
918   per_tensor_quantized = torch._empty_affine_quantized(
929   qtensors = [per_tensor_quantized, per_channel_quantized]
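The second result pairs a per-tensor quantized tensor with a per-channel one. A minimal sketch of how such a pair might be built with PyTorch's private factory functions _empty_affine_quantized and _empty_per_channel_affine_quantized; the shapes, scales, and zero points below are illustrative only and are not taken from the matched test file.

    import torch

    # Per-tensor: a single scale/zero_point covers the whole tensor.
    per_tensor_quantized = torch._empty_affine_quantized(
        (2, 3), scale=0.1, zero_point=0, dtype=torch.quint8)

    # Per-channel: one scale/zero_point per slice along `axis`.
    per_channel_quantized = torch._empty_per_channel_affine_quantized(
        (2, 3),
        scales=torch.tensor([0.1, 0.2], dtype=torch.double),
        zero_points=torch.tensor([0, 0], dtype=torch.long),
        axis=0,
        dtype=torch.quint8)

    qtensors = [per_tensor_quantized, per_channel_quantized]
    for q in qtensors:
        print(q.qscheme(), q.shape)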