# Generate model interfaces using metadata

Using [TensorFlow Lite Metadata](../models/convert/metadata), developers can generate
wrapper code to enable integration on Android. For most developers, the
graphical interface of [Android Studio ML Model Binding](#mlbinding) is the
easiest to use. If you require more customisation or are using command line
tooling, the [TensorFlow Lite Codegen](#codegen) is also available.

## Use Android Studio ML Model Binding {:#mlbinding}

For TensorFlow Lite models enhanced with [metadata](../models/convert/metadata.md),
developers can use Android Studio ML Model Binding to automatically configure
settings for the project and generate wrapper classes based on the model
metadata. The wrapper code removes the need to interact directly with
`ByteBuffer`. Instead, developers can interact with the TensorFlow Lite model
with typed objects such as `Bitmap` and `Rect`.

Note: Requires [Android Studio 4.1](https://developer.android.com/studio) or
above.

### Import a TensorFlow Lite model in Android Studio

1.  Right-click on the module in which you would like to use the TFLite model,
    or click on `File`, then `New` > `Other` > `TensorFlow Lite Model`
    ![Right-click menus to access the TensorFlow Lite import functionality](../images/android/right_click_menu.png)

1.  Select the location of your TFLite file. Note that the tooling will
    configure the module's dependencies on your behalf: all ML Model Binding
    dependencies are automatically inserted into your Android module's
    `build.gradle` file.

    Optional: Select the second checkbox for importing TensorFlow GPU if you
    want to use GPU acceleration.
    ![Import dialog for TFLite model](../images/android/import_dialog.png)

1.  Click `Finish`.

1.  The following screen will appear after the import is successful. To start
    using the model, select Kotlin or Java, then copy and paste the code under
    the `Sample Code` section (a rough sketch of what that sample code
    typically looks like follows this list). You can get back to this screen by
    double-clicking the TFLite model under the `ml` directory in Android Studio.
    ![Model details page in Android Studio](../images/android/model_details.png)

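The exact sample code depends on your model and its metadata. As a hedged
sketch of what Android Studio typically generates for an image model (the class
name `MyClassifierModel` and its `process()` method are illustrative
placeholders, not names from this guide), the Java variant might look like:

```java
import org.tensorflow.lite.support.image.TensorImage;
import java.io.IOException;

try {
    // Load the wrapper class generated from the imported .tflite file.
    MyClassifierModel model = MyClassifierModel.newInstance(context);

    // Wrap an Android Bitmap in a typed TensorImage and run inference.
    TensorImage image = TensorImage.fromBitmap(bitmap);
    MyClassifierModel.Outputs outputs = model.process(image);

    // Read the typed outputs exposed by the wrapper.
    // ...

    // Release resources when the model is no longer needed.
    model.close();
} catch (IOException e) {
    // The model file could not be loaded.
}
```

Prefer the snippet shown in the model details screen, since it matches your
model's actual input and output names.
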
### Accelerating model inference {:#acceleration}

ML Model Binding provides a way for developers to accelerate their code through
the use of delegates and by adjusting the number of threads.

Note: The TensorFlow Lite Interpreter must be created on the same thread as
where it is run. Otherwise, the error `TfLiteGpuDelegate Invoke: GpuDelegate
must run on the same thread where it was initialized.` may occur.

Step 1. Check that the module's `build.gradle` file contains the following
dependency:

```build
dependencies {
    ...
    // TFLite GPU delegate 2.3.0 or above is required.
    implementation 'org.tensorflow:tensorflow-lite-gpu:2.3.0'
}
```

Step 2. Check whether the GPU running on the device is compatible with the
TensorFlow GPU delegate; if it is not, run the model using multiple CPU
threads:

<div>
    <devsite-selector>
    <section>
      <h3>Kotlin</h3>
      <p><pre class="prettyprint lang-kotlin">
    import org.tensorflow.lite.support.model.Model
    import org.tensorflow.lite.gpu.CompatibilityList
    import org.tensorflow.lite.gpu.GpuDelegate

    val compatList = CompatibilityList()

    val options = if (compatList.isDelegateSupportedOnThisDevice) {
        // if the device has a supported GPU, add the GPU delegate
        Model.Options.Builder().setDevice(Model.Device.GPU).build()
    } else {
        // if the GPU is not supported, run on 4 threads
        Model.Options.Builder().setNumThreads(4).build()
    }

    // Initialize the model as usual feeding in the options object
    val myModel = MyModel.newInstance(context, options)

    // Run inference per sample code
      </pre></p>
    </section>
    <section>
      <h3>Java</h3>
      <p><pre class="prettyprint lang-java">
    import org.tensorflow.lite.support.model.Model;
    import org.tensorflow.lite.gpu.CompatibilityList;
    import org.tensorflow.lite.gpu.GpuDelegate;

    // Initialize interpreter with GPU delegate
    Model.Options options;
    CompatibilityList compatList = new CompatibilityList();

    if (compatList.isDelegateSupportedOnThisDevice()) {
        // if the device has a supported GPU, add the GPU delegate
        options = new Model.Options.Builder().setDevice(Model.Device.GPU).build();
    } else {
        // if the GPU is not supported, run on 4 threads
        options = new Model.Options.Builder().setNumThreads(4).build();
    }

    MyModel myModel = MyModel.newInstance(context, options);

    // Run inference per sample code
      </pre></p>
    </section>
    </devsite-selector>
</div>
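
As the note above warns, the model must be created and run on the same thread.
A minimal sketch of one way to satisfy that constraint is to confine both steps
to a single-threaded executor. This is not from the original guide; `MyModel`,
its `process()` method, and `inputImage` are placeholders for your generated
class and sample code:

```java
import java.io.IOException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// One executor thread owns the model for its whole lifetime.
ExecutorService inferenceExecutor = Executors.newSingleThreadExecutor();

inferenceExecutor.execute(() -> {
    try {
        // Created on the executor thread...
        MyModel myModel = MyModel.newInstance(context, options);
        // ...and run on the same thread, so a GPU delegate stays valid.
        MyModel.Outputs outputs = myModel.process(inputImage);
        // Handle outputs here, or post them back to the main thread.
    } catch (IOException e) {
        // Error reading the model
    }
});
```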

## Generate model interfaces with TensorFlow Lite code generator {:#codegen}

Note: TensorFlow Lite wrapper code generator currently only supports Android.

For TensorFlow Lite models enhanced with [metadata](../models/convert/metadata.md),
developers can use the TensorFlow Lite Android wrapper code generator to create
platform-specific wrapper code. The wrapper code removes the need to interact
directly with `ByteBuffer`. Instead, developers can interact with the TensorFlow
Lite model with typed objects such as `Bitmap` and `Rect`.

The usefulness of the code generator depends on the completeness of the
TensorFlow Lite model's metadata entry. Refer to the `<Codegen usage>` section
under relevant fields in
[metadata_schema.fbs](https://github.com/tensorflow/tflite-support/blob/master/tensorflow_lite_support/metadata/metadata_schema.fbs)
to see how the codegen tool parses each field.

### Generate wrapper code

You will need to install the following tooling in your terminal:

```sh
pip install tflite-support
```

Once installed, the code generator can be invoked using the following syntax:

```sh
tflite_codegen --model=./model_with_metadata/mobilenet_v1_0.75_160_quantized.tflite \
    --package_name=org.tensorflow.lite.classify \
    --model_class_name=MyClassifierModel \
    --destination=./classify_wrapper
```

The resulting code will be located in the destination directory. If you are
using [Google Colab](https://colab.research.google.com/) or another remote
environment, it may be easier to zip up the result into an archive and download
it to your Android Studio project:

```python
# Zip up the generated code
!zip -r classify_wrapper.zip classify_wrapper/

# Download the archive
from google.colab import files
files.download('classify_wrapper.zip')
```

### Using the generated code

#### Step 1: Import the generated code

Unzip the generated code, if necessary, into a directory structure. The root of
the generated code is assumed to be `SRC_ROOT`.

Open the Android Studio project where you would like to use the TensorFlow Lite
model and import the generated module via `File` > `New` > `Import Module`,
then select `SRC_ROOT`.

Using the above example, the imported directory and module would be called
`classify_wrapper`.

#### Step 2: Update the app's `build.gradle` file

In the app module that will be consuming the generated library module:

Under the android section, add the following:

```build
aaptOptions {
   noCompress "tflite"
}
```

Note: Starting from version 4.1 of the Android Gradle plugin, `.tflite` is
added to the `noCompress` list by default and the `aaptOptions` block above is
no longer needed.

Under the dependencies section, add the following:

```build
implementation project(":classify_wrapper")
```

#### Step 3: Using the model

```java
// 1. Initialize the model
MyClassifierModel myImageClassifier = null;

try {
    myImageClassifier = new MyClassifierModel(this);
} catch (IOException io) {
    // Error reading the model
}

if (myImageClassifier != null) {

    // 2. Set the input with a Bitmap called inputBitmap
    MyClassifierModel.Inputs inputs = myImageClassifier.createInputs();
    inputs.loadImage(inputBitmap);

    // 3. Run the model
    MyClassifierModel.Outputs outputs = myImageClassifier.run(inputs);

    // 4. Retrieve the result
    Map<String, Float> labeledProbability = outputs.getProbability();
}
```
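
The wrapper returns plain Java types, so standard collection code is enough to
post-process the result. As a hedged sketch (not part of the generated API),
picking the highest-scoring label from the map above could look like this:

```java
// Find the label with the highest probability in labeledProbability.
// Plain Java; assumes java.util.Map is imported as in the snippet above.
String topLabel = null;
float topScore = Float.NEGATIVE_INFINITY;
for (Map.Entry<String, Float> entry : labeledProbability.entrySet()) {
    if (entry.getValue() > topScore) {
        topScore = entry.getValue();
        topLabel = entry.getKey();
    }
}
// topLabel is the most likely class; topScore is its probability.
```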

### Accelerating model inference

The generated code provides a way for developers to accelerate their code
through the use of [delegates](../performance/delegates.md) and the number of
threads. These can be set when initializing the model object, which takes three
parameters:

*   **`Context`**: Context from the Android Activity or Service
*   (Optional) **`Device`**: TFLite acceleration delegate, for example
    GPUDelegate or NNAPIDelegate
*   (Optional) **`numThreads`**: Number of threads used to run the model -
    default is one.

For example, to use an NNAPI delegate and up to three threads, you can
initialize the model like this:

```java
try {
    myImageClassifier = new MyClassifierModel(this, Model.Device.NNAPI, 3);
} catch (IOException io) {
    // Error reading the model
}
```

### Troubleshooting

If you get a 'java.io.FileNotFoundException: This file can not be opened as a
file descriptor; it is probably compressed' error, insert the following lines
under the android section of the app module that uses the library module:

```build
aaptOptions {
   noCompress "tflite"
}
```

Note: Starting from version 4.1 of the Android Gradle plugin, `.tflite` is
added to the `noCompress` list by default and the `aaptOptions` block above is
no longer needed.