# Select TensorFlow operators to use in TensorFlow Lite

Caution: This feature is experimental.

The TensorFlow Lite builtin op library has grown rapidly, and will continue to
grow, but there remains a long tail of TensorFlow ops that are not yet natively
supported by TensorFlow Lite. These unsupported ops can be a point of friction
in the TensorFlow Lite model conversion process. To that end, the team has
recently been working on an experimental mechanism for reducing this friction.

This document outlines how to use TensorFlow Lite with select TensorFlow ops.
*Note that this feature is experimental and is under active development.* As you
use this feature, keep in mind the [known limitations](#known-limitations), and
please send feedback about models that work and issues you are facing to
tflite@tensorflow.org.

TensorFlow Lite will continue to have
[TensorFlow Lite builtin ops](ops_compatibility.md) optimized for mobile and
embedded devices. However, TensorFlow Lite models can now use a subset of
TensorFlow ops when TFLite builtin ops are not sufficient.

Models converted with TensorFlow ops require a TensorFlow Lite interpreter with
a larger binary size than the interpreter with only TFLite builtin ops.
Additionally, performance optimizations will not be available for any
TensorFlow ops in the TensorFlow Lite model.

This document outlines how to [convert](#converting-the-model) and
[run](#running-the-model) a TFLite model with TensorFlow ops on your platform of
choice. It also discusses some [known limitations](#known-limitations), the
[future plans](#future-plans) for this feature, and basic
[performance and size metrics](#metrics).

## Converting the model

To convert a TensorFlow model to a TensorFlow Lite model with TensorFlow ops,
use the `target_ops` argument in the
[TensorFlow Lite converter](../convert/). The
following values are valid options for `target_ops`:

*   `TFLITE_BUILTINS` - Converts models using TensorFlow Lite builtin ops.
*   `SELECT_TF_OPS` - Converts models using TensorFlow ops. The exact subset of
    supported ops can be found in the whitelist at
    `lite/toco/tflite/whitelisted_flex_ops.cc`.

The recommended approach is to convert the model with `TFLITE_BUILTINS`, then
with both `TFLITE_BUILTINS,SELECT_TF_OPS`, and finally with only
`SELECT_TF_OPS`. Using both options (i.e. `TFLITE_BUILTINS,SELECT_TF_OPS`)
creates models with TensorFlow Lite ops where possible. Using only
`SELECT_TF_OPS` is useful when the model contains TensorFlow ops that are only
partially supported by TensorFlow Lite, and one would like to avoid those
limitations.
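
As a minimal sketch of this progression, the following tries each ops set in
turn and keeps the first conversion that succeeds. The `convert_with_fallback`
helper is hypothetical, not part of the TensorFlow Lite API; it simply wraps
the converter calls shown in the next example.

```python
import tensorflow as tf

# Ops sets to try, from most portable to most permissive.
_OPS_SETS_TO_TRY = [
    [tf.lite.OpsSet.TFLITE_BUILTINS],
    [tf.lite.OpsSet.TFLITE_BUILTINS, tf.lite.OpsSet.SELECT_TF_OPS],
    [tf.lite.OpsSet.SELECT_TF_OPS],
]

def convert_with_fallback(saved_model_dir):
  """Hypothetical helper: returns the first successful conversion."""
  for ops_set in _OPS_SETS_TO_TRY:
    converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
    converter.target_ops = ops_set
    try:
      return converter.convert()
    except Exception as e:  # Conversion fails if an op is unsupported.
      print("Conversion with %s failed: %s" % (ops_set, e))
  raise ValueError("Model could not be converted with any ops set.")
```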

The following example shows how to use `target_ops` in the
[`TFLiteConverter`](../convert/python_api.md) Python API.

```python
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
converter.target_ops = [tf.lite.OpsSet.TFLITE_BUILTINS,
                        tf.lite.OpsSet.SELECT_TF_OPS]
tflite_model = converter.convert()
open("converted_model.tflite", "wb").write(tflite_model)
```
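
If the model is available as a frozen GraphDef rather than a SavedModel, the
same `target_ops` setting applies. Below is a minimal sketch using the
frozen-graph entry point, with the file and tensor names mirroring the
command-line example that follows.

```python
import tensorflow as tf

# Convert a frozen GraphDef, keeping builtin ops where possible and
# falling back to select TensorFlow ops elsewhere.
converter = tf.lite.TFLiteConverter.from_frozen_graph(
    graph_def_file="/tmp/foo.pb",
    input_arrays=["input"],
    output_arrays=["MobilenetV1/Predictions/Reshape_1"])
converter.target_ops = [tf.lite.OpsSet.TFLITE_BUILTINS,
                        tf.lite.OpsSet.SELECT_TF_OPS]
tflite_model = converter.convert()
open("/tmp/foo.tflite", "wb").write(tflite_model)
```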

The following example shows how to use `target_ops` in the
[`tflite_convert`](../convert/cmdline_examples.md) command line tool.

```sh
tflite_convert \
  --output_file=/tmp/foo.tflite \
  --graph_def_file=/tmp/foo.pb \
  --input_arrays=input \
  --output_arrays=MobilenetV1/Predictions/Reshape_1 \
  --target_ops=TFLITE_BUILTINS,SELECT_TF_OPS
```

When building and running `tflite_convert` directly with `bazel`, please pass
`--define=with_select_tf_ops=true` as an additional argument.

```sh
bazel run --define=with_select_tf_ops=true tflite_convert -- \
  --output_file=/tmp/foo.tflite \
  --graph_def_file=/tmp/foo.pb \
  --input_arrays=input \
  --output_arrays=MobilenetV1/Predictions/Reshape_1 \
  --target_ops=TFLITE_BUILTINS,SELECT_TF_OPS
```

## Running the model

When using a TensorFlow Lite model that has been converted with support for
select TensorFlow ops, the client must also use a TensorFlow Lite runtime that
includes the necessary library of TensorFlow ops.

### Android AAR

A new Android AAR target with select TensorFlow ops has been added for
convenience. Assuming a
[working TensorFlow Lite build environment](android.md), build the Android AAR
with select TensorFlow ops as follows:

```sh
bazel build --cxxopt='--std=c++11' -c opt \
  --config=android_arm --config=monolithic \
  //tensorflow/lite/java:tensorflow-lite-with-select-tf-ops
```

This will generate an AAR file in `bazel-genfiles/tensorflow/lite/java/`. From
there, you can either import the AAR directly into your project, or publish the
custom AAR to your local Maven repository:

```sh
mvn install:install-file \
  -Dfile=bazel-genfiles/tensorflow/lite/java/tensorflow-lite-with-select-tf-ops.aar \
  -DgroupId=org.tensorflow \
  -DartifactId=tensorflow-lite-with-select-tf-ops -Dversion=0.1.100 -Dpackaging=aar
```

Finally, in your app's `build.gradle`, ensure you have the `mavenLocal()`
dependency and replace the standard TensorFlow Lite dependency with the one that
has support for select TensorFlow ops:

```
allprojects {
    repositories {
        jcenter()
        mavenLocal()
    }
}

dependencies {
    implementation 'org.tensorflow:tensorflow-lite-with-select-tf-ops:0.1.100'
}
```

### iOS

With Xcode Command Line Tools installed, TensorFlow Lite with select TensorFlow
ops support can be built with the following command:

```sh
tensorflow/contrib/makefile/build_all_ios_with_tflite.sh
```

This will generate the required static linking libraries in the
`tensorflow/contrib/makefile/gen/lib/` directory.

The TensorFlow Lite camera example app can be used to test this. A new
TensorFlow Lite Xcode project with support for select TensorFlow ops has been
added to
`tensorflow/lite/examples/ios/camera/tflite_camera_example_with_select_tf_ops.xcodeproj`.

To use this feature in your own project, either clone the example project or
set the project settings for a new or existing project as follows:

*   In Build Phases -> Link Binary With Libraries, add the static libraries
    under the `tensorflow/contrib/makefile/gen/lib/` directory:
    *   `libtensorflow-lite.a`
    *   `libprotobuf.a`
    *   `nsync.a`
*   In Build Settings -> Header Search Paths, add the following directories:
    *   `tensorflow/lite/`
    *   `tensorflow/contrib/makefile/downloads/flatbuffers/include`
    *   `tensorflow/contrib/makefile/downloads/eigen`
*   In Build Settings -> Other Linker Flags, add `-force_load
    tensorflow/contrib/makefile/gen/lib/libtensorflow-lite.a`.

A CocoaPod with support for select TensorFlow ops will also be released in the
future.

### C++

When building TensorFlow Lite libraries using the bazel pipeline, the
additional TensorFlow ops library can be included and enabled as follows:

*   Enable monolithic builds if necessary by adding the `--config=monolithic`
    build flag.
*   Do one of the following:
    *   Include the `--define=with_select_tf_ops=true` build flag in the
        `bazel build` invocation when building TensorFlow Lite.
    *   Add the TensorFlow ops delegate library dependency to the build
        dependencies: `tensorflow/lite/delegates/flex:delegate`.

Note that the necessary `TfLiteDelegate` will be installed automatically when
creating the interpreter at runtime as long as the delegate is linked into the
client library. It is not necessary to explicitly install the delegate instance
as is typically required with other delegate types.

### Python pip Package

Python support is actively under development.

## Metrics

### Performance

When using a mixture of both builtin and select TensorFlow ops, all of the same
TensorFlow Lite optimizations and optimized builtin kernels will be available
and usable with the converted model.

The following table describes the average time taken to run inference on
MobileNet on a Pixel 2. The listed times are an average of 100 runs. These
targets were built for Android using the flags: `--config=android_arm64 -c opt`.

Build                                 | Time (milliseconds)
------------------------------------- | -------------------
Only built-in ops (`TFLITE_BUILTINS`) | 260.7
Using only TF ops (`SELECT_TF_OPS`)   | 264.5

### Binary Size

The following table describes the binary size of TensorFlow Lite for each build.
These targets were built for Android using `--config=android_arm -c opt`.

Build                 | C++ Binary Size | Android APK Size
--------------------- | --------------- | ----------------
Only built-in ops     | 796 KB          | 561 KB
Built-in ops + TF ops | 23.0 MB         | 8.0 MB

## Known Limitations

The following is a list of some of the known limitations:

*   Control flow ops are not yet supported.
*   The
    [`post_training_quantization`](https://www.tensorflow.org/performance/post_training_quantization)
    flag is currently not supported for TensorFlow ops, so it will not quantize
    weights for any TensorFlow ops. In models with both TensorFlow Lite builtin
    ops and TensorFlow ops, the weights for the builtin ops will be quantized;
    see the sketch after this list.
*   Ops that require explicit initialization from resources, like `HashTableV2`,
    are not yet supported.
*   Certain TensorFlow ops may not support the full set of input/output types
    that are typically available in stock TensorFlow.
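
As a sketch of the quantization behavior described above, the following enables
post-training weight quantization alongside select TensorFlow ops, assuming the
TF 1.x converter attribute `post_training_quantize`. Per the limitation above,
only the weights of builtin ops are expected to be quantized.

```python
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
converter.target_ops = [tf.lite.OpsSet.TFLITE_BUILTINS,
                        tf.lite.OpsSet.SELECT_TF_OPS]
# Request post-training weight quantization. Weights consumed by select
# TensorFlow ops remain in float; only the weights of TensorFlow Lite
# builtin ops are quantized.
converter.post_training_quantize = True
tflite_model = converter.convert()
```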

## Future Plans

The following is a list of improvements to this pipeline that are in progress:

*   *Selective registration* - Work is underway to make it simple to generate
    TFLite interpreter binaries that only contain the TensorFlow ops required
    for a particular set of models.
*   *Improved usability* - The conversion process will be simplified to only
    require a single pass through the converter. Additionally, pre-built Android
    AAR and iOS CocoaPod binaries will be provided.
*   *Improved performance* - Work is underway to ensure TensorFlow Lite with
    TensorFlow ops has performance parity with TensorFlow Mobile.