
Searched refs:unfold_batch_matmul (Results 1 – 6 of 6) sorted by relevance

/external/tensorflow/tensorflow/compiler/mlir/lite/common/
tfl_pass_config.h:36    unfold_batch_matmul(true),
tfl_pass_config.h:58    bool unfold_batch_matmul;  (member)
/external/tensorflow/tensorflow/compiler/mlir/lite/python/
graphdef_to_tfl_flatbuffer.cc:95      pass_config.unfold_batch_matmul = false;  (in ConvertGraphDefToTFLiteFlatBuffer())
saved_model_to_tfl_flatbuffer.cc:182  pass_config.unfold_batch_matmul = false;  (in ConvertSavedModelToTFLiteFlatBuffer())
/external/tensorflow/tensorflow/compiler/mlir/lite/transforms/
passes.h:44         bool unfold_batch_matmul, bool allow_bf16_and_f16_type_legalization);
prepare_tf.cc:86    explicit PrepareTFPass(bool unfold_batch_matmul,  (in PrepareTFPass(), argument)
prepare_tf.cc:88    unfold_batch_matmul_ = unfold_batch_matmul;  (in PrepareTFPass())
prepare_tf.cc:1495  bool unfold_batch_matmul, bool allow_bf16_type_legalization) {  (in CreatePrepareTFPass(), argument)
prepare_tf.cc:1496  return std::make_unique<PrepareTFPass>(unfold_batch_matmul,  (in CreatePrepareTFPass())
/external/tensorflow/tensorflow/compiler/mlir/lite/
tf_tfl_passes.cc:195  pass_config.unfold_batch_matmul,  (in AddTFToTFLConversionPasses())