# TensorFlow Lite and TensorFlow operator compatibility

TensorFlow Lite supports a number of TensorFlow operations used in common
inference models. As they are processed by the TensorFlow Lite Optimizing
Converter, those operations may be elided or fused before the supported
operations are mapped to their TensorFlow Lite counterparts.

Since the TensorFlow Lite builtin operator library only supports a limited
number of TensorFlow operators, not every model is convertible. Even for
supported operations, very specific usage patterns are sometimes expected for
performance reasons. We expect the set of supported operations to expand in
future TensorFlow Lite releases.

The best way to understand how to build a TensorFlow model that can be used with
TensorFlow Lite is to carefully consider how operations are converted and
optimized, along with the limitations imposed by this process.
## Supported types

Most TensorFlow Lite operations target both floating-point (`float32`) and
quantized (`uint8`, `int8`) inference, but many ops do not yet support other
types such as `tf.float16` and strings.
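To make the quantized case concrete, integer tensors in TensorFlow Lite are related to real values by an affine mapping, `real_value = (quantized_value - zero_point) * scale`. The following is a minimal pure-Python sketch of that mapping for `int8`; the function names and rounding details are illustrative, not part of the TensorFlow Lite API:

```python
# Sketch of affine quantization: real_value = (q - zero_point) * scale.
# Helper names are illustrative only; TF Lite's actual kernels differ
# (e.g. in rounding mode and per-axis parameters).

def quantize(values, scale, zero_point, qmin=-128, qmax=127):
    """Map float values to int8 with a scale and zero-point, clamping to range."""
    return [max(qmin, min(qmax, round(v / scale) + zero_point)) for v in values]

def dequantize(q_values, scale, zero_point):
    """Recover approximate float values from the int8 representation."""
    return [(q - zero_point) * scale for q in q_values]

scale, zero_point = 0.1, 0
q = quantize([0.0, 1.0, -1.0, 12.7], scale, zero_point)
print(q)  # [0, 10, -10, 127]  (12.7 clamps to the int8 maximum)
print(dequantize(q, scale, zero_point))
```

Note how quantization is lossy: values outside `[-12.8, 12.7]` at this scale saturate, and dequantized values only approximate the originals.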
Apart from using different versions of the operations, the other difference
between floating-point and quantized models is the way they are converted.
Quantized conversion requires dynamic range information for tensors. This
requires "fake-quantization" during model training, getting range information
via a calibration data set, or doing "on-the-fly" range estimation. See
[quantization](../performance/model_optimization.md).

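The calibration approach mentioned above can be sketched in a few lines: observe the minimum and maximum of a tensor over a calibration data set, then derive an affine scale and zero-point from that range. This is a simplified illustration, not the converter's actual implementation:

```python
# Illustrative sketch of calibration-based range estimation (not the
# actual TF Lite converter code): track min/max over calibration data,
# then derive uint8 quantization parameters from the observed range.

def calibrate(batches):
    """Return the (min, max) of all values seen across calibration batches."""
    lo = min(min(b) for b in batches)
    hi = max(max(b) for b in batches)
    return lo, hi

def range_to_uint8_params(lo, hi):
    """Turn an observed [lo, hi] range into a uint8 scale and zero-point."""
    lo, hi = min(lo, 0.0), max(hi, 0.0)  # the range must include zero
    scale = (hi - lo) / 255.0
    zero_point = round((0.0 - lo) / (hi - lo) * 255.0)
    return scale, zero_point

lo, hi = calibrate([[-1.0, 0.5], [0.2, 3.0]])
scale, zp = range_to_uint8_params(lo, hi)
print(lo, hi)     # -1.0 3.0
print(zp)         # 64
```

Forcing the range to include zero ensures that an exact zero in the real domain maps to an integer `zero_point`, which matters for operations such as zero-padding.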
## Supported operations and restrictions

TensorFlow Lite supports a subset of TensorFlow operations with some
limitations. For a full list of operations and limitations, see the
[TF Lite Ops page](https://www.tensorflow.org/mlir/tfl_ops).

## Straight-forward conversions, constant-folding and fusing

A number of TensorFlow operations can be processed by TensorFlow Lite even
though they have no direct equivalent. This is the case for operations that can
be simply removed from the graph (`tf.identity`), replaced by tensors
(`tf.placeholder`), or fused into more complex operations (`tf.nn.bias_add`).
Even some supported operations may sometimes be removed through one of these
processes.

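The "simply removed from the graph" case can be illustrated with a toy graph transform: identity-like nodes are dropped and their consumers are rewired to the identity's input. This is only a conceptual sketch of elision, not the converter's implementation, and the graph representation here is invented for the example:

```python
# Toy illustration of graph elision (not converter code): Identity nodes
# are removed and any node that consumed them is rewired to read the
# Identity's input instead.

def elide_identities(nodes):
    """nodes: {name: (op, [input names])}. Returns the graph without Identity ops."""
    # Map each Identity node to the tensor it forwards.
    forward = {n: ins[0] for n, (op, ins) in nodes.items() if op == "Identity"}

    def resolve(name):
        # Follow chains of identities (Identity -> Identity -> ...) to the source.
        while name in forward:
            name = forward[name]
        return name

    return {
        n: (op, [resolve(i) for i in ins])
        for n, (op, ins) in nodes.items()
        if op != "Identity"
    }

graph = {
    "x": ("Placeholder", []),
    "id1": ("Identity", ["x"]),
    "id2": ("Identity", ["id1"]),
    "relu": ("Relu", ["id2"]),
}
print(elide_identities(graph))
# {'x': ('Placeholder', []), 'relu': ('Relu', ['x'])}
```

Fusing works analogously but in the other direction: instead of deleting a node, a pattern such as `MatMul` followed by `tf.nn.bias_add` is replaced by a single fused operation.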
Here is a non-exhaustive list of TensorFlow operations that are usually removed
from the graph:

*   `tf.add`
*   `tf.check_numerics`
*   `tf.constant`
*   `tf.div`
*   `tf.divide`
*   `tf.fake_quant_with_min_max_args`
*   `tf.fake_quant_with_min_max_vars`
*   `tf.identity`
*   `tf.maximum`
*   `tf.minimum`
*   `tf.multiply`
*   `tf.no_op`
*   `tf.placeholder`
*   `tf.placeholder_with_default`
*   `tf.realdiv`
*   `tf.reduce_max`
*   `tf.reduce_min`
*   `tf.reduce_sum`
*   `tf.rsqrt`
*   `tf.shape`
*   `tf.sqrt`
*   `tf.square`
*   `tf.subtract`
*   `tf.tile`
*   `tf.nn.batch_norm_with_global_normalization`
*   `tf.nn.bias_add`
*   `tf.nn.fused_batch_norm`
*   `tf.nn.relu`
*   `tf.nn.relu6`

Note: Many of these operations don't have TensorFlow Lite equivalents, and the
corresponding model will not be convertible if they can't be elided or fused.

## Experimental operations

The following TensorFlow Lite operations are present, but not ready for custom
models:

*   `CALL`
*   `CONCAT_EMBEDDINGS`
*   `CUSTOM`
*   `EMBEDDING_LOOKUP_SPARSE`
*   `HASHTABLE_LOOKUP`
*   `LSH_PROJECTION`
*   `SKIP_GRAM`
*   `SVDF`