# Frequently Asked Questions

If you don't find an answer to your question here, please look through our
detailed documentation for the topic or file a
[GitHub issue](https://github.com/tensorflow/tensorflow/issues).

## Model Conversion

#### What formats are supported for conversion from TensorFlow to TensorFlow Lite?

The TensorFlow Lite converter supports the following formats:

*   SavedModels:
    [TFLiteConverter.from_saved_model](../convert/python_api.md#exporting_a_savedmodel_)
*   Frozen GraphDefs generated by
    [freeze_graph.py](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/tools/freeze_graph.py):
    [TFLiteConverter.from_frozen_graph](../convert/python_api.md#exporting_a_graphdef_from_file_)
*   tf.keras HDF5 models:
    [TFLiteConverter.from_keras_model_file](../convert/python_api.md#exporting_a_tfkeras_file_)
*   tf.Session:
    [TFLiteConverter.from_session](../convert/python_api.md#exporting_a_graphdef_from_tfsession_)

The recommended approach is to integrate the
[Python converter](../convert/python_api.md) into your model pipeline so that
compatibility issues are detected early.
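
A minimal sketch of a SavedModel conversion, assuming the
`tf.lite.TFLiteConverter` Python API and using placeholder file paths, might
look like this:

```
import tensorflow as tf

# Convert a SavedModel directory into a TensorFlow Lite flatbuffer.
# 'saved_model' is a placeholder path for your own export directory.
converter = tf.lite.TFLiteConverter.from_saved_model('saved_model')
tflite_model = converter.convert()

# Write the serialized flatbuffer to disk.
with open('converted_model.tflite', 'wb') as f:
    f.write(tflite_model)
```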

#### Why doesn't my model convert?

Since TensorFlow Lite supports fewer operations than TensorFlow, some inference
models cannot be converted. For unimplemented operations, take a look at the
question on
[missing operators](faq.md#why-are-some-operations-not-implemented-in-tensorflow-lite).
Unsupported operators include embeddings and LSTM/RNNs. For conversion issues
not related to missing operations, search our
[GitHub issues](https://github.com/tensorflow/tensorflow/issues?q=label%3Acomp%3Alite+)
or file a [new one](https://github.com/tensorflow/tensorflow/issues).

#### How do I determine the inputs/outputs for a GraphDef protocol buffer?

The easiest way to inspect a graph from a `.pb` file is to use the
[summarize_graph](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/tools/graph_transforms/README.md#inspecting-graphs)
tool.

If that approach yields an error, you can visualize the GraphDef with
[TensorBoard](https://www.tensorflow.org/guide/summaries_and_tensorboard) and
look for the inputs and outputs in the graph. To visualize a `.pb` file, use the
[`import_pb_to_tensorboard.py`](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/tools/import_pb_to_tensorboard.py)
script as shown below:

```
python import_pb_to_tensorboard.py --model_dir <model path> --log_dir <log dir path>
```
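
You can also inspect the graph programmatically. The sketch below, which
assumes a frozen GraphDef and uses `model.pb` as a placeholder path, lists the
`Placeholder` nodes that typically serve as graph inputs:

```
import tensorflow as tf

# Load a frozen GraphDef from disk ('model.pb' is a placeholder path).
graph_def = tf.compat.v1.GraphDef()
with open('model.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())

# Placeholder ops are usually the graph's inputs; print their names.
for node in graph_def.node:
    if node.op == 'Placeholder':
        print('possible input:', node.name)
```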

#### How do I inspect a `.tflite` file?

TensorFlow Lite models can be visualized using the
[visualize.py](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/tools/visualize.py)
script in our repository.

*   [Clone the TensorFlow repository](https://www.tensorflow.org/install/source)
*   Run the `visualize.py` script with bazel:

```
bazel run //tensorflow/lite/tools:visualize model.tflite visualized_model.html
```
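
For a quick programmatic look, the Python `tf.lite.Interpreter` can also report
a model's input and output tensors. A minimal sketch, with `model.tflite` as a
placeholder path:

```
import tensorflow as tf

# Load the model and allocate its tensors ('model.tflite' is a placeholder).
interpreter = tf.lite.Interpreter(model_path='model.tflite')
interpreter.allocate_tensors()

# Each entry describes a tensor's name, shape, and dtype.
print(interpreter.get_input_details())
print(interpreter.get_output_details())
```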

## Models & Operations

#### Why are some operations not implemented in TensorFlow Lite?

To keep TensorFlow Lite lightweight, only a subset of TensorFlow operations is
supported by the converter. The [Compatibility Guide](ops_compatibility.md)
provides a list of operations currently supported by TensorFlow Lite.

If you don’t see a specific operation (or an equivalent) listed, it's likely
that it has not been prioritized. The team tracks requests for new operations on
GitHub [issue #21526](https://github.com/tensorflow/tensorflow/issues/21526).
Leave a comment if your request hasn’t already been mentioned.

In the meantime, you can try implementing a
[custom operator](ops_custom.md) or using a different model that only
contains supported operators. If binary size is not a constraint, try using
TensorFlow Lite with [select TensorFlow ops](ops_select.md).
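
As a sketch, enabling select TensorFlow ops during conversion can look like the
following. It assumes a converter version that exposes
`target_spec.supported_ops` (older releases used a different attribute) and a
placeholder `saved_model` path:

```
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model('saved_model')
# Fall back to TensorFlow kernels for ops without TensorFlow Lite builtins.
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,  # the default TensorFlow Lite ops
    tf.lite.OpsSet.SELECT_TF_OPS,    # selected TensorFlow ops
]
tflite_model = converter.convert()
```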

#### How do I test that a TensorFlow Lite model behaves the same as the original TensorFlow model?

The best way to test the behavior of a TensorFlow Lite model is to use our API
with test data and compare its outputs to the original TensorFlow model's
outputs for the same inputs. Take a look at our
[Python Interpreter example](../convert/python_api.md) that generates random
data to feed to the interpreter.
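
A minimal sketch of that workflow, assuming a floating-point model and using
`tf_results` as a stand-in for the original model's output on the same input:

```
import numpy as np
import tensorflow as tf

# Load the converted model and allocate tensors
# ('converted_model.tflite' is a placeholder path).
interpreter = tf.lite.Interpreter(model_path='converted_model.tflite')
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed random data shaped like the model's input.
input_data = np.array(
    np.random.random_sample(input_details[0]['shape']), dtype=np.float32)
interpreter.set_tensor(input_details[0]['index'], input_data)
interpreter.invoke()
tflite_results = interpreter.get_tensor(output_details[0]['index'])

# Compare against the original model's output for the same input;
# `tf_results` is a placeholder for however you run that model.
# np.testing.assert_allclose(tf_results, tflite_results, atol=1e-5)
```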

## Optimization

#### How do I reduce the size of my converted TensorFlow Lite model?

[Post-training quantization](../performance/post_training_quantization.md) can be
used during conversion to TensorFlow Lite to reduce the size of the model.
Post-training quantization quantizes floating-point weights to 8 bits of
precision and dequantizes them at runtime to perform floating-point
computations. However, note that this can affect model accuracy.
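
As a sketch, enabling post-training quantization during conversion can look
like this. It assumes a converter version that exposes the `optimizations`
attribute (older releases used a `post_training_quantize` flag) and a
placeholder `saved_model` path:

```
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model('saved_model')
# Ask the converter to quantize weights to 8 bits where possible.
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_quantized_model = converter.convert()

with open('quantized_model.tflite', 'wb') as f:
    f.write(tflite_quantized_model)
```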

If retraining the model is an option, consider
[quantization-aware training](https://github.com/tensorflow/tensorflow/tree/r1.13/tensorflow/contrib/quantize).
However, note that quantization-aware training is only available for a subset of
convolutional neural network architectures.

For a deeper understanding of different optimization methods, look at
[Model optimization](../performance/model_optimization.md).

#### How do I optimize TensorFlow Lite performance for my machine learning task?

The high-level process to optimize TensorFlow Lite performance looks something
like this:

*   *Make sure that you have the right model for the task.* For image
    classification, check out our [list of hosted models](hosted_models.md).
*   *Tweak the number of threads.* Many TensorFlow Lite operators support
    multi-threaded kernels. You can use `SetNumThreads()` in the
    [C++ API](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/interpreter.h#L345)
    to do this (see the sketch after this list). Note, however, that increasing
    the thread count can make performance more variable depending on the
    environment.
*   *Use hardware accelerators.* TensorFlow Lite supports model acceleration for
    specific hardware using delegates. For example, to use Android’s Neural
    Networks API, call
    [`UseNNAPI`](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/interpreter.h#L343)
    on the interpreter. Or take a look at our
    [GPU delegate tutorial](../performance/gpu.md).
*   *(Advanced) Profile your model.* The TensorFlow Lite
    [benchmarking tool](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/tools/benchmark)
    has a built-in profiler that can show per-operator statistics. If you know
    how to optimize an operator’s performance for your specific platform, you
    can implement a [custom operator](ops_custom.md).
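
As a sketch of the threading knob from Python: recent versions of
`tf.lite.Interpreter` accept a `num_threads` argument (the Python counterpart
of `SetNumThreads()` in C++). The model path and thread count below are
placeholders to tune for your device:

```
import tensorflow as tf

# 'model.tflite' is a placeholder path; `num_threads` requires a recent
# TensorFlow release. Benchmark different values on your target device.
interpreter = tf.lite.Interpreter(model_path='model.tflite', num_threads=4)
interpreter.allocate_tensors()
# Set input tensors before invoking in real use.
interpreter.invoke()
```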

For a more in-depth discussion on how to optimize performance, take a look at
[Best Practices](../performance/best_practices.md).