# Frequently Asked Questions

If you don't find an answer to your question here, please look through our
detailed documentation for the topic or file a
[GitHub issue](https://github.com/tensorflow/tensorflow/issues).

## Model Conversion

#### What formats are supported for conversion from TensorFlow to TensorFlow Lite?

The supported formats are listed [here](../models/convert/index#python_api).

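For example, here is a minimal sketch of converting a SavedModel, one of the
supported formats (the `saved_model_dir` path is a placeholder):

```python
import tensorflow as tf

# Convert a SavedModel to TensorFlow Lite.
# "saved_model_dir" is a placeholder path to your own SavedModel.
converter = tf.lite.TFLiteConverter.from_saved_model("saved_model_dir")
tflite_model = converter.convert()

# Write the flatbuffer to disk.
with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```
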
#### Why are some operations not implemented in TensorFlow Lite?

To keep TFLite lightweight, only certain TF operators (listed in the
[allowlist](op_select_allowlist)) are supported.

#### Why doesn't my model convert?

Since the number of TensorFlow Lite operations is smaller than TensorFlow's,
some models cannot be converted. Some common errors are listed
[here](../models/convert/index#conversion-errors).

For conversion issues not related to missing operations or control flow ops,
search our
[GitHub issues](https://github.com/tensorflow/tensorflow/issues?q=label%3Acomp%3Alite+)
or file a [new one](https://github.com/tensorflow/tensorflow/issues).

#### How do I test that a TensorFlow Lite model behaves the same as the original TensorFlow model?

The best way to test is to compare the outputs of the TensorFlow and the
TensorFlow Lite models for the same inputs (test data or random inputs), as
shown [here](inference#load-and-run-a-model-in-python).

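As a rough sketch of that comparison, assuming a single-input, single-output
float model (`model.h5` and `model.tflite` are placeholder paths):

```python
import numpy as np
import tensorflow as tf

# Placeholder: the original Keras model you converted to TensorFlow Lite.
keras_model = tf.keras.models.load_model("model.h5")

# Load the converted model into the TensorFlow Lite interpreter.
interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Generate a random input matching the expected shape and dtype.
x = np.random.random_sample(input_details[0]["shape"]).astype(
    input_details[0]["dtype"])

# Run the same input through both models.
interpreter.set_tensor(input_details[0]["index"], x)
interpreter.invoke()
tflite_output = interpreter.get_tensor(output_details[0]["index"])
tf_output = keras_model(x).numpy()

# A float (non-quantized) model should match within a small tolerance.
np.testing.assert_allclose(tf_output, tflite_output, rtol=1e-5, atol=1e-5)
```
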
#### How do I determine the inputs/outputs for a GraphDef protocol buffer?

The easiest way to inspect a graph from a `.pb` file is to use
[Netron](https://github.com/lutzroeder/netron), an open-source viewer for
machine learning models.

If Netron cannot open the graph, you can try the
[summarize_graph](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/tools/graph_transforms/README.md#inspecting-graphs)
tool.

If the `summarize_graph` tool yields an error, you can visualize the GraphDef
with [TensorBoard](https://www.tensorflow.org/guide/summaries_and_tensorboard)
and look for the inputs and outputs in the graph. To visualize a `.pb` file,
use the
[`import_pb_to_tensorboard.py`](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/tools/import_pb_to_tensorboard.py)
script as shown below:

```shell
python import_pb_to_tensorboard.py --model_dir <model path> --log_dir <log dir path>
```

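You can also inspect the graph programmatically. The following is a rough
sketch that loads a frozen GraphDef and prints `Placeholder` nodes (the usual
inputs) plus nodes that no other node consumes (candidate outputs);
`frozen_graph.pb` is a placeholder path:

```python
import tensorflow as tf

# Load a frozen GraphDef ("frozen_graph.pb" is a placeholder path).
graph_def = tf.compat.v1.GraphDef()
with tf.io.gfile.GFile("frozen_graph.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

# Collect every tensor consumed by some node, stripping the ":index"
# suffix and the "^" control-dependency prefix.
consumed = {inp.split(":")[0].lstrip("^")
            for node in graph_def.node for inp in node.input}

for node in graph_def.node:
    if node.op == "Placeholder":
        print("input:", node.name)
    elif node.name not in consumed:
        print("possible output:", node.name)
```
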
#### How do I inspect a `.tflite` file?

[Netron](https://github.com/lutzroeder/netron) is the easiest way to visualize a
TensorFlow Lite model.

If Netron cannot open your TensorFlow Lite model, you can try the
[visualize.py](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/tools/visualize.py)
script in our repository.

If you're using TF 2.5 or a later version:

```shell
python -m tensorflow.lite.tools.visualize model.tflite visualized_model.html
```

Otherwise, you can run the script with Bazel:

*   [Clone the TensorFlow repository](https://www.tensorflow.org/install/source)
*   Run the `visualize.py` script with Bazel:

```shell
bazel run //tensorflow/lite/tools:visualize model.tflite visualized_model.html
```

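If you only need the input and output tensor details rather than a full
visualization, a minimal sketch using the Python `tf.lite.Interpreter` also
works:

```python
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

# Print the name, shape, and dtype of every input and output tensor.
for detail in interpreter.get_input_details():
    print("input:", detail["name"], detail["shape"], detail["dtype"])
for detail in interpreter.get_output_details():
    print("output:", detail["name"], detail["shape"], detail["dtype"])
```
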
## Optimization

#### How do I reduce the size of my converted TensorFlow Lite model?

[Post-training quantization](../performance/post_training_quantization) can be
used during conversion to TensorFlow Lite to reduce the size of the model.
Post-training quantization quantizes weights from floating point to 8 bits of
precision and dequantizes them at runtime to perform floating-point
computations. However, note that this can have accuracy implications.

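A minimal sketch of enabling post-training quantization during conversion (the
`saved_model_dir` path is a placeholder):

```python
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model("saved_model_dir")
# Optimize.DEFAULT enables dynamic-range quantization, which stores
# weights as 8-bit integers instead of 32-bit floats.
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_quant_model = converter.convert()
```
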
If retraining the model is an option, consider
[quantization-aware training](https://github.com/tensorflow/tensorflow/tree/r1.13/tensorflow/contrib/quantize).
However, note that quantization-aware training is only available for a subset of
convolutional neural network architectures.

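As a rough sketch using the Keras API of the TensorFlow Model Optimization
Toolkit (a separate `tensorflow-model-optimization` pip package, distinct from
the contrib tool linked above; the tiny model here is a placeholder for your
own architecture):

```python
import tensorflow as tf
import tensorflow_model_optimization as tfmot

# Placeholder model; substitute your own (supported) architecture.
base_model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(8, 3, activation="relu", input_shape=(28, 28, 1)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# Insert fake-quantization nodes that simulate 8-bit inference
# during training.
quant_aware_model = tfmot.quantization.keras.quantize_model(base_model)
quant_aware_model.compile(optimizer="adam",
                          loss="sparse_categorical_crossentropy",
                          metrics=["accuracy"])
# Train quant_aware_model as usual, then convert it with TFLiteConverter.
```
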
For a deeper understanding of different optimization methods, look at
[Model optimization](../performance/model_optimization).

#### How do I optimize TensorFlow Lite performance for my machine learning task?

The high-level process to optimize TensorFlow Lite performance looks something
like this:

*   *Make sure that you have the right model for the task.* For image
    classification, check out
    [TensorFlow Hub](https://tfhub.dev/s?deployment-format=lite&module-type=image-classification).
*   *Tweak the number of threads.* Many TensorFlow Lite operators support
    multi-threaded kernels. You can use `SetNumThreads()` in the
    [C++ API](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/interpreter.h#L345)
    to do this (see the sketch after this list). However, increasing the number
    of threads results in performance variability depending on the environment.
*   *Use hardware accelerators.* TensorFlow Lite supports model acceleration for
    specific hardware using delegates. See our
    [Delegates](../performance/delegates) guide for information on which
    accelerators are supported and how to use them with your model on-device.
*   *(Advanced) Profile your model.* The TensorFlow Lite
    [benchmarking tool](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/tools/benchmark)
    has a built-in profiler that can show per-operator statistics. If you know
    how you can optimize an operator’s performance for your specific platform,
    you can implement a [custom operator](ops_custom).

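As a sketch of the threading tweak mentioned above, the Python API exposes the
same control as a `num_threads` constructor argument:

```python
import tensorflow as tf

# num_threads is the Python counterpart of SetNumThreads() in the C++ API.
interpreter = tf.lite.Interpreter(model_path="model.tflite", num_threads=4)
interpreter.allocate_tensors()
```
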
For a more in-depth discussion on how to optimize performance, take a look at
[Best Practices](../performance/best_practices).