# Adding metadata to TensorFlow Lite models

TensorFlow Lite metadata provides a standard for model descriptions. The
metadata is an important source of knowledge about what the model does and its
input / output information. The metadata consists of both

*   human-readable parts which convey the best practice when using the model,
    and
*   machine-readable parts that can be leveraged by code generators, such as the
    [TensorFlow Lite Android code generator](../inference_with_metadata/codegen.md#generate-code-with-tensorflow-lite-android-code-generator)
    and the
    [Android Studio ML Binding feature](../inference_with_metadata/codegen.md#generate-code-with-android-studio-ml-model-binding).

All image models published on
[TensorFlow Lite hosted models](https://www.tensorflow.org/lite/guide/hosted_models)
and [TensorFlow Hub](https://tfhub.dev/s?deployment-format=lite) have been
populated with metadata.

## Setup the metadata tools

Before adding metadata to your model, you will need to set up a Python
programming environment for running TensorFlow. There is a detailed guide on
how to do this [here](https://www.tensorflow.org/install).

After setting up the Python programming environment, you will need to install
additional tooling:

```sh
pip install tflite-support
```

TensorFlow Lite metadata tooling supports both Python 2 and Python 3.

## Adding metadata

There are three parts to the model metadata in the
[schema](https://github.com/tensorflow/tflite-support/blob/master/tensorflow_lite_support/metadata/metadata_schema.fbs):

1.  **Model information** - Overall description of the model as well as items
    such as license terms. See
    [ModelMetadata](https://github.com/tensorflow/tflite-support/blob/4cd0551658b6e26030e0ba7fc4d3127152e0d4ae/tensorflow_lite_support/metadata/metadata_schema.fbs#L640).
2.  **Input information** - Description of the inputs and pre-processing
    required, such as normalization. See
    [SubGraphMetadata.input_tensor_metadata](https://github.com/tensorflow/tflite-support/blob/4cd0551658b6e26030e0ba7fc4d3127152e0d4ae/tensorflow_lite_support/metadata/metadata_schema.fbs#L590).
3.  **Output information** - Description of the output and post-processing
    required, such as mapping to labels. See
    [SubGraphMetadata.output_tensor_metadata](https://github.com/tensorflow/tflite-support/blob/4cd0551658b6e26030e0ba7fc4d3127152e0d4ae/tensorflow_lite_support/metadata/metadata_schema.fbs#L599).

Since TensorFlow Lite only supports a single subgraph at this point, the
[TensorFlow Lite code generator](../inference_with_metadata/codegen.md#generate-code-with-tensorflow-lite-android-code-generator)
and the
[Android Studio ML Binding feature](../inference_with_metadata/codegen.md#generate-code-with-android-studio-ml-model-binding)
will use `ModelMetadata.name` and `ModelMetadata.description`, instead of
`SubGraphMetadata.name` and `SubGraphMetadata.description`, when displaying
metadata and generating code.

### Supported Input / Output types

TensorFlow Lite metadata for inputs and outputs is not designed with specific
model types in mind, but rather with input and output types. It does not matter
what the model functionally does; as long as the input and output types consist
of the following, or a combination of the following, they are supported by
TensorFlow Lite metadata:

*   Feature - Numbers which are unsigned integers or float32.
*   Image - Metadata currently supports RGB and greyscale images.
*   Bounding box - Rectangular shape bounding boxes. The schema supports
    [a variety of numbering schemes](https://github.com/tensorflow/tflite-support/blob/4cd0551658b6e26030e0ba7fc4d3127152e0d4ae/tensorflow_lite_support/metadata/metadata_schema.fbs#L214).

### Pack the associated files

TensorFlow Lite models may come with different associated files. For example,
natural language models usually have vocab files that map word pieces to word
IDs, and classification models may have label files that indicate object
categories. Without the associated files (if any), a model will not function
well.

The associated files can now be bundled with the model through the metadata
Python library. The new TensorFlow Lite model becomes a zip file that contains
both the model and the associated files. It can be unpacked with common zip
tools. This new model format keeps using the same file extension, `.tflite`,
and is compatible with the existing TFLite framework and interpreter. See
[Pack metadata and associated files into the model](#pack-metadata-and-associated-files-into-the-model)
for more details.

The associated file information can be recorded in the metadata. Depending on
the file type and where the file is attached (i.e. `ModelMetadata`,
`SubGraphMetadata`, or `TensorMetadata`),
[the TensorFlow Lite Android code generator](../inference_with_metadata/codegen.md)
may apply corresponding pre/post processing automatically to the object. See
[the \<Codegen usage\> section of each associated file type](https://github.com/tensorflow/tflite-support/blob/4cd0551658b6e26030e0ba7fc4d3127152e0d4ae/tensorflow_lite_support/metadata/metadata_schema.fbs#L77-L127)
in the schema for more details.

### Normalization and quantization parameters

Normalization is a common data preprocessing technique in machine learning. The
goal of normalization is to change the values to a common scale, without
distorting differences in the ranges of values.

[Model quantization](https://www.tensorflow.org/lite/performance/model_optimization#model_quantization)
is a technique that allows for reduced precision representations of weights
and, optionally, activations for both storage and computation.

In terms of preprocessing and post-processing, normalization and quantization
are two independent steps. Here are the details.

|                         | Normalization | Quantization |
| :---------------------- | :------------ | :----------- |
| An example of the parameter values of the input image in MobileNet for float and quant models, respectively. | **Float model**:<br>- mean: 127.5<br>- std: 127.5<br>**Quant model**:<br>- mean: 127.5<br>- std: 127.5 | **Float model**:<br>- zeroPoint: 0<br>- scale: 1.0<br>**Quant model**:<br>- zeroPoint: 128.0<br>- scale: 0.0078125f |
| When to invoke? | **Inputs**: If input data is normalized in training, the input data of inference needs to be normalized accordingly.<br>**Outputs**: output data will not be normalized in general. | **Float models** do not need quantization.<br>**Quantized models** may or may not need quantization in pre/post processing. It depends on the datatype of the input/output tensors.<br>- float tensors: no quantization in pre/post processing needed. Quant op and dequant op are baked into the model graph.<br>- int8/uint8 tensors: need quantization in pre/post processing. |
| Formula | normalized_input = (input - mean) / std | **Quantize for inputs**:<br>q = f / scale + zeroPoint<br>**Dequantize for outputs**:<br>f = (q - zeroPoint) * scale |
| Where are the parameters? | Filled by the model creator and stored in model metadata, as `NormalizationOptions` | Filled automatically by the TFLite converter, and stored in the tflite model file. |
| How to get the parameters? | Through the `MetadataExtractor` API [2] | Through the TFLite `Tensor` API [1] or through the `MetadataExtractor` API [2] |
| Do float and quant models share the same value? | Yes, float and quant models have the same normalization parameters | No, the float model does not need quantization. |
| Does the TFLite code generator or Android Studio ML binding automatically generate it in data processing? | Yes | Yes |

[1] The
[TensorFlow Lite Java API](https://github.com/tensorflow/tensorflow/blob/09ec15539eece57b257ce9074918282d88523d56/tensorflow/lite/java/src/main/java/org/tensorflow/lite/Tensor.java#L73)
and the
[TensorFlow Lite C++ API](https://github.com/tensorflow/tensorflow/blob/09ec15539eece57b257ce9074918282d88523d56/tensorflow/lite/c/common.h#L391).
\
[2] The [metadata extractor library](#read-the-metadata-from-models)

When processing image data for uint8 models, normalization and quantization are
sometimes skipped. It is fine to do so when the pixel values are in the range of
[0, 255]. But in general, you should always process the data according to the
normalization and quantization parameters when applicable.

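With MobileNet's quant-model parameters from the table above (mean = std = 127.5, scale = 0.0078125, zeroPoint = 128), normalizing and then quantizing a uint8 pixel almost reproduces the original value, which is why the two steps can sometimes be skipped. A small sketch of that round trip:

```python
# Round trip: uint8 pixel -> normalize -> quantize -> uint8 again,
# using MobileNet's quant-model parameters from the table above.
mean, std = 127.5, 127.5
scale, zero_point = 0.0078125, 128

def normalize(pixel):
    return (pixel - mean) / std          # normalized_input = (input - mean) / std

def quantize(f):
    q = round(f / scale + zero_point)    # q = f / scale + zeroPoint
    return min(max(q, 0), 255)           # clamp to the uint8 range

max_error = max(abs(quantize(normalize(p)) - p) for p in range(256))
print(max_error)  # the round trip is off by at most 1
```

The residual error of at most one quantization step is usually negligible for image models, but for other parameter values the two steps do not cancel out, so the general advice above still applies.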
### Examples

Note: The export directory specified has to exist before you run the script; it
does not get created as part of the process.

You can find examples of how the metadata should be populated for different
types of models here:

#### Image classification

Download the script
[here](https://github.com/tensorflow/examples/tree/master/lite/examples/image_classification/metadata/metadata_writer_for_image_classifier.py),
which populates metadata into
[mobilenet_v1_0.75_160_quantized.tflite](https://tfhub.dev/tensorflow/lite-model/mobilenet_v1_0.75_160_quantized/1/default/1).
Run the script like this:

```sh
python ./metadata_writer_for_image_classifier.py \
    --model_file=./model_without_metadata/mobilenet_v1_0.75_160_quantized.tflite \
    --label_file=./model_without_metadata/labels.txt \
    --export_directory=model_with_metadata
```

To populate metadata for other image classification models, add the model specs
like
[this](https://github.com/tensorflow/examples/blob/master/lite/examples/image_classification/metadata/metadata_writer_for_image_classifier.py#L63-L74)
into the script. The rest of this guide highlights some of the key sections
in the image classification example to illustrate the key elements.

### Deep dive into the image classification example

#### Model information

Metadata starts by creating a new model info:

```python
import os

from tflite_support import flatbuffers
from tflite_support import metadata as _metadata
from tflite_support import metadata_schema_py_generated as _metadata_fb

""" ... """
"""Creates the metadata for an image classifier."""

# Creates model info.
model_meta = _metadata_fb.ModelMetadataT()
model_meta.name = "MobileNetV1 image classifier"
model_meta.description = ("Identify the most prominent object in the "
                          "image from a set of 1,001 categories such as "
                          "trees, animals, food, vehicles, person etc.")
model_meta.version = "v1"
model_meta.author = "TensorFlow"
model_meta.license = ("Apache License. Version 2.0 "
                      "http://www.apache.org/licenses/LICENSE-2.0.")
```

#### Input / output information

This section shows you how to describe your model's input and output signature.
This metadata may be used by automatic code generators to create pre- and
post-processing code. To create input or output information about a tensor:

```python
# Creates input info.
input_meta = _metadata_fb.TensorMetadataT()

# Creates output info.
output_meta = _metadata_fb.TensorMetadataT()
```

#### Image input

Images are a common input type for machine learning. TensorFlow Lite metadata
supports information such as colorspace and pre-processing information such as
normalization. The dimension of the image does not require manual specification
since it is already provided by the shape of the input tensor and can be
automatically inferred.

```python
input_meta.name = "image"
input_meta.description = (
    "Input image to be classified. The expected image is {0} x {1}, with "
    "three channels (red, blue, and green) per pixel. Each value in the "
    "tensor is a single byte between 0 and 255.".format(160, 160))
input_meta.content = _metadata_fb.ContentT()
input_meta.content.contentProperties = _metadata_fb.ImagePropertiesT()
input_meta.content.contentProperties.colorSpace = (
    _metadata_fb.ColorSpaceType.RGB)
input_meta.content.contentPropertiesType = (
    _metadata_fb.ContentProperties.ImageProperties)
input_normalization = _metadata_fb.ProcessUnitT()
input_normalization.optionsType = (
    _metadata_fb.ProcessUnitOptions.NormalizationOptions)
input_normalization.options = _metadata_fb.NormalizationOptionsT()
input_normalization.options.mean = [127.5]
input_normalization.options.std = [127.5]
input_meta.processUnits = [input_normalization]
input_stats = _metadata_fb.StatsT()
input_stats.max = [255]
input_stats.min = [0]
input_meta.stats = input_stats
```

#### Label output

Labels can be mapped to an output tensor via an associated file using
`TENSOR_AXIS_LABELS`.

```python
# Creates output info.
output_meta = _metadata_fb.TensorMetadataT()
output_meta.name = "probability"
output_meta.description = "Probabilities of the 1001 labels respectively."
output_meta.content = _metadata_fb.ContentT()
output_meta.content.contentProperties = _metadata_fb.FeaturePropertiesT()
output_meta.content.contentPropertiesType = (
    _metadata_fb.ContentProperties.FeatureProperties)
output_stats = _metadata_fb.StatsT()
output_stats.max = [1.0]
output_stats.min = [0.0]
output_meta.stats = output_stats
label_file = _metadata_fb.AssociatedFileT()
label_file.name = os.path.basename("your_path_to_label_file")
label_file.description = "Labels for objects that the model can recognize."
label_file.type = _metadata_fb.AssociatedFileType.TENSOR_AXIS_LABELS
output_meta.associatedFiles = [label_file]
```

#### Create the metadata Flatbuffers

The following code combines the model information with the input and output
information:

```python
# Creates subgraph info.
subgraph = _metadata_fb.SubGraphMetadataT()
subgraph.inputTensorMetadata = [input_meta]
subgraph.outputTensorMetadata = [output_meta]
model_meta.subgraphMetadata = [subgraph]

b = flatbuffers.Builder(0)
b.Finish(
    model_meta.Pack(b),
    _metadata.MetadataPopulator.METADATA_FILE_IDENTIFIER)
metadata_buf = b.Output()
```

#### Pack metadata and associated files into the model

Once the metadata Flatbuffers is created, the metadata and the label file are
written into the TFLite file via the `populate` method:

```python
populator = _metadata.MetadataPopulator.with_model_file(model_file)
populator.load_metadata_buffer(metadata_buf)
populator.load_associated_files(["your_path_to_label_file"])
populator.populate()
```

You can pack as many associated files as you want into the model through
`load_associated_files`. However, it is required to pack at least those files
documented in the metadata. In this example, packing the label file is
mandatory.

## Visualize the metadata

You can use [Netron](https://github.com/lutzroeder/netron) to visualize your
metadata, or you can read the metadata from a TensorFlow Lite model into JSON
format using the `MetadataDisplayer`:

```python
displayer = _metadata.MetadataDisplayer.with_model_file(export_model_path)
export_json_file = os.path.join(FLAGS.export_directory,
                                os.path.splitext(model_basename)[0] + ".json")
json_file = displayer.get_metadata_json()
# Optional: write out the metadata as a json file.
with open(export_json_file, "w") as f:
  f.write(json_file)
```

Android Studio also supports displaying metadata through the
[Android Studio ML Binding feature](https://developer.android.com/studio/preview/features#tensor-flow-lite-models).

## Metadata versioning

The
[metadata schema](https://github.com/tensorflow/tflite-support/blob/master/tensorflow_lite_support/metadata/metadata_schema.fbs)
is versioned both by the Semantic versioning number, which tracks the changes of
the schema file, and by the Flatbuffers file identification, which indicates the
true version compatibility.

### The Semantic versioning number

The metadata schema is versioned by the
[Semantic versioning number](https://github.com/tensorflow/tflite-support/blob/4cd0551658b6e26030e0ba7fc4d3127152e0d4ae/tensorflow_lite_support/metadata/metadata_schema.fbs#L53),
such as MAJOR.MINOR.PATCH. It tracks schema changes according to the rules
[here](https://github.com/tensorflow/tflite-support/blob/4cd0551658b6e26030e0ba7fc4d3127152e0d4ae/tensorflow_lite_support/metadata/metadata_schema.fbs#L32-L44).
See the
[history of fields](https://github.com/tensorflow/tflite-support/blob/4cd0551658b6e26030e0ba7fc4d3127152e0d4ae/tensorflow_lite_support/metadata/metadata_schema.fbs#L63)
added after version `1.0.0`.

### The Flatbuffers file identification

Semantic versioning guarantees compatibility as long as the rules are followed,
but it does not always indicate true incompatibility: bumping up the MAJOR
number does not necessarily mean that backwards compatibility is broken.
Therefore, we use the
[Flatbuffers file identification](https://google.github.io/flatbuffers/md__schemas.html),
[file_identifier](https://github.com/tensorflow/tflite-support/blob/4cd0551658b6e26030e0ba7fc4d3127152e0d4ae/tensorflow_lite_support/metadata/metadata_schema.fbs#L61),
to denote the true compatibility of the metadata schema. The file identifier is
exactly 4 characters long. It is fixed to a certain metadata schema and not
subject to change by users. If the backward compatibility of the metadata schema
has to be broken for some reason, the file_identifier will be bumped up, for
example, from “M001” to “M002”. The file_identifier is expected to change much
less frequently than the metadata_version.

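Because FlatBuffers places the 4-character file identifier at a fixed position in the buffer (immediately after the 4-byte root offset), it can be inspected without any generated code. A minimal sketch; `read_file_identifier` and the fake buffer are illustrations, not part of the TFLite API:

```python
import struct

def read_file_identifier(metadata_buffer: bytes) -> str:
    # A FlatBuffers buffer stores the little-endian 4-byte root offset first,
    # immediately followed by the 4-character file identifier.
    return metadata_buffer[4:8].decode("ascii")

# A fake buffer with root offset 16 and identifier "M001":
fake_buffer = struct.pack("<I", 16) + b"M001"
print(read_file_identifier(fake_buffer))  # M001
```
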
### The minimum necessary metadata parser version

The
[minimum necessary metadata parser version](https://github.com/tensorflow/tflite-support/blob/4cd0551658b6e26030e0ba7fc4d3127152e0d4ae/tensorflow_lite_support/metadata/metadata_schema.fbs#L681)
is the minimum version of the metadata parser (the Flatbuffers generated code)
that can read the metadata Flatbuffers in full. The version is effectively the
largest version number among the versions of all the fields populated and the
smallest compatible version indicated by the file identifier. The minimum
necessary metadata parser version is automatically populated by the
`MetadataPopulator` when the metadata is populated into a TFLite model. See the
[metadata extractor](#read-the-metadata-from-models) for more information on how
the minimum necessary metadata parser version is used.

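The "largest version number" rule amounts to a plain semantic-version comparison over the fields that were populated. A sketch with hypothetical per-field versions; in practice this value is computed for you by `MetadataPopulator`:

```python
def parse_version(version: str):
    # "MAJOR.MINOR.PATCH" -> (major, minor, patch); tuples compare numerically,
    # so "1.10.0" correctly ranks above "1.9.9".
    return tuple(int(part) for part in version.split("."))

# Hypothetical schema versions in which the populated fields were introduced:
field_versions = ["1.0.0", "1.2.0", "1.0.1"]
min_parser_version = max(field_versions, key=parse_version)
print(min_parser_version)  # 1.2.0
```
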
## Read the metadata from models

The Metadata Extractor library is a convenient tool to read the metadata and
associated files from models across different platforms (see the
[Java version](https://github.com/tensorflow/tflite-support/tree/master/tensorflow_lite_support/metadata/java)
and the
[C++ version](https://github.com/tensorflow/tflite-support/tree/master/tensorflow_lite_support/metadata/cc)).
You can build your own metadata extractor tool in other languages using the
Flatbuffers library.

### Read the metadata in Java

To use the Metadata Extractor library in your Android app, we recommend using
the
[TensorFlow Lite Metadata AAR hosted at JCenter](https://bintray.com/google/tensorflow/tensorflow-lite-metadata).
It contains the `MetadataExtractor` class, as well as the FlatBuffers Java
bindings for the
[metadata schema](https://github.com/tensorflow/tflite-support/blob/master/tensorflow_lite_support/metadata/metadata_schema.fbs)
and the
[model schema](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/schema/schema.fbs).

You can specify this in your `build.gradle` dependencies as follows:

```build
dependencies {
    implementation 'org.tensorflow:tensorflow-lite-metadata:0.1.0'
}
```

You can initialize a `MetadataExtractor` object with a `ByteBuffer` that points
to the model:

```java
public MetadataExtractor(ByteBuffer buffer);
```

The `ByteBuffer` must remain unchanged for the entire lifetime of the
`MetadataExtractor` object. The initialization may fail if the Flatbuffers file
identifier of the model metadata does not match that of the metadata parser. See
[metadata versioning](#metadata-versioning) for more information.

With matching file identifiers, the metadata extractor will successfully read
metadata generated from all past and future schemas due to the Flatbuffers'
forwards and backwards compatibility mechanism. However, fields from future
schemas cannot be extracted by older metadata extractors. The
[minimum necessary parser version](#the-minimum-necessary-metadata-parser-version)
of the metadata indicates the minimum version of metadata parser that can read
the metadata Flatbuffers in full. You can use the following method to verify
whether the minimum necessary parser version condition is met:

```java
public final boolean isMinimumParserVersionSatisfied();
```

Passing in a model without metadata is allowed. However, invoking methods that
read from the metadata will cause runtime errors. You can check if a model has
metadata by invoking the `hasMetadata` method:

```java
public boolean hasMetadata();
```

`MetadataExtractor` provides convenient functions for you to get the
input/output tensors' metadata. For example,

```java
public int getInputTensorCount();
public TensorMetadata getInputTensorMetadata(int inputIndex);
public QuantizationParams getInputTensorQuantizationParams(int inputIndex);
public int[] getInputTensorShape(int inputIndex);
public int getOutputTensorCount();
public TensorMetadata getOutputTensorMetadata(int outputIndex);
public QuantizationParams getOutputTensorQuantizationParams(int outputIndex);
public int[] getOutputTensorShape(int outputIndex);
```

Though the
[TensorFlow Lite model schema](https://github.com/tensorflow/tensorflow/blob/aa7ff6aa28977826e7acae379e82da22482b2bf2/tensorflow/lite/schema/schema.fbs#L1075)
supports multiple subgraphs, the TFLite Interpreter currently only supports a
single subgraph. Therefore, `MetadataExtractor` omits the subgraph index as an
input argument in its methods.

## Read the associated files from models

The TensorFlow Lite model with metadata and associated files is essentially a
zip file that can be unpacked with common zip tools to get the associated files.
For example, you can unzip
[mobilenet_v1_0.75_160_quantized](https://tfhub.dev/tensorflow/lite-model/mobilenet_v1_0.75_160_quantized/1/metadata/1)
and extract the label file in the model as follows:

```sh
$ unzip mobilenet_v1_0.75_160_quantized_1_metadata_1.tflite
Archive:  mobilenet_v1_0.75_160_quantized_1_metadata_1.tflite
 extracting: labels.txt
```

You can also read associated files through the Metadata Extractor library.

In Java, pass the file name into the `MetadataExtractor.getAssociatedFile`
method:

```java
public InputStream getAssociatedFile(String fileName);
```

Similarly, in C++, this can be done with the
`ModelMetadataExtractor::GetAssociatedFile` method:

```c++
tflite::support::StatusOr<absl::string_view> GetAssociatedFile(
      const std::string& filename) const;
```
511