Searched refs: SetAllowFp16PrecisionForFp32 (Results 1 – 7 of 7), sorted by relevance

/external/tensorflow/tensorflow/lite/
interpreter_experimental.cc:58    void Interpreter::SetAllowFp16PrecisionForFp32(bool allow) {    (in tflite::Interpreter::SetAllowFp16PrecisionForFp32())
interpreter.h:508    void SetAllowFp16PrecisionForFp32(bool allow);
/external/tensorflow/tensorflow/lite/kernels/
test_util.cc:254    interpreter_->SetAllowFp16PrecisionForFp32(allow_fp32_relax_to_fp16);    (in BuildInterpreter())
/external/tensorflow/tensorflow/lite/examples/label_image/
label_image.cc:226    interpreter->SetAllowFp16PrecisionForFp32(settings->allow_fp16);    (in RunInference())
/external/tensorflow/tensorflow/lite/java/src/main/native/
nativeinterpreterwrapper_jni.cc:308    interpreter->SetAllowFp16PrecisionForFp32(static_cast<bool>(allow));    (in Java_org_tensorflow_lite_NativeInterpreterWrapper_allowFp16PrecisionForFp32())
/external/tensorflow/tensorflow/lite/tools/benchmark/
benchmark_tflite_model.cc:715    interpreter_->SetAllowFp16PrecisionForFp32(params_.Get<bool>("allow_fp16"));    (in Init())
/external/tensorflow/
RELEASE.md:4818    * Deprecates `Interpreter::SetAllowFp16PrecisionForFp32(bool)` C++ API.
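
Taken together, the hits above show the usual call pattern: the flag is set on a freshly built tflite::Interpreter, before AllocateTensors(), as in label_image.cc (RunInference()) and benchmark_tflite_model.cc (Init()); the RELEASE.md entry notes that the C++ API is deprecated. Below is a minimal C++ sketch of that usage, assuming a placeholder model path ("model.tflite") and the builtin op resolver; neither comes from the search results.

    #include <memory>

    #include "tensorflow/lite/interpreter.h"
    #include "tensorflow/lite/kernels/register.h"
    #include "tensorflow/lite/model.h"

    int main() {
      // Load the flatbuffer model (path is an illustrative placeholder).
      std::unique_ptr<tflite::FlatBufferModel> model =
          tflite::FlatBufferModel::BuildFromFile("model.tflite");
      if (!model) return 1;

      // Build an interpreter with the builtin op resolver.
      tflite::ops::builtin::BuiltinOpResolver resolver;
      std::unique_ptr<tflite::Interpreter> interpreter;
      if (tflite::InterpreterBuilder(*model, resolver)(&interpreter) != kTfLiteOk ||
          !interpreter) {
        return 1;
      }

      // Allow fp32 ops to run in fp16 where supported, mirroring the
      // label_image.cc and benchmark_tflite_model.cc call sites above.
      // RELEASE.md marks this interpreter-level API as deprecated.
      interpreter->SetAllowFp16PrecisionForFp32(true);

      if (interpreter->AllocateTensors() != kTfLiteOk) return 1;
      // ... fill input tensors, call interpreter->Invoke(), read outputs ...
      return 0;
    }

Since RELEASE.md marks the API as deprecated, newer code typically configures fp16 relaxation through delegate options instead; the call above is kept only to illustrate the interpreter-level flag found by this search.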