# TensorFlow Lite and TensorFlow operator compatibility

TensorFlow Lite supports a number of TensorFlow operations used in common
inference models. As they are processed by the TensorFlow Lite Optimizing
Converter, those operations may be elided or fused, before the supported
operations are mapped to their TensorFlow Lite counterparts.

Since the set of TensorFlow Lite operations is smaller than TensorFlow's, not
every model is convertible. Even for supported operations, very specific usage
patterns are sometimes expected, for performance reasons. We expect to expand
the set of supported operations in future TensorFlow Lite releases. Additional
ops can be included by [using select TensorFlow ops](ops_select.md), at the cost
of binary size.
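
As a minimal sketch of that fallback path, assuming the TF 2.x converter API
(exact entry points may differ between releases), select TensorFlow ops can be
enabled alongside the builtins:

```python
import tensorflow as tf

# "saved_model_dir" is a placeholder path for your own model.
converter = tf.lite.TFLiteConverter.from_saved_model("saved_model_dir")
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,  # prefer builtin TFLite kernels
    tf.lite.OpsSet.SELECT_TF_OPS,    # fall back to TensorFlow kernels
]
tflite_model = converter.convert()
```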

The best way to understand how to build a TensorFlow model that can be used with
TensorFlow Lite is to carefully consider how operations are converted and
optimized, along with the limitations imposed by this process.

## Supported Types

Most TensorFlow Lite operations target both floating-point (float32) and
quantized (uint8, int8) inference, but many ops do not yet cover other types,
such as tf.float16 and strings.

Apart from using different versions of the operations, the other difference
between floating-point and quantized models lies in the way they are converted.
Quantized conversion requires dynamic range information for tensors. This
requires "fake-quantization" during model training, getting range information
via a calibration data set, or doing "on-the-fly" range estimation. See
[quantization](../performance/model_optimization.md).
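
As a sketch of the calibration approach, assuming the TF 2.x converter API, a
representative dataset supplies the range information needed for quantized
conversion:

```python
import numpy as np
import tensorflow as tf

def representative_dataset():
    # Hypothetical calibration data; in practice, yield a few hundred
    # samples drawn from the model's real input distribution.
    for _ in range(100):
        yield [np.random.rand(1, 224, 224, 3).astype(np.float32)]

# "saved_model_dir" is a placeholder path for your own model.
converter = tf.lite.TFLiteConverter.from_saved_model("saved_model_dir")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
quantized_model = converter.convert()
```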

## Data Format and Broadcasting

At the moment TensorFlow Lite supports only TensorFlow's "NHWC" format, and
broadcasting is only supported in a limited number of ops (tf.add, tf.mul,
tf.sub, and tf.div).
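
For illustration, a small sketch of both constraints (the tensor shapes here
are arbitrary):

```python
import tensorflow as tf

# NHWC layout: [batch, height, width, channels].
images = tf.zeros([1, 224, 224, 3])  # N=1, H=224, W=224, C=3

# Broadcasting works in the handful of ops listed above: a per-channel
# bias of shape [3] is broadcast across the batch and spatial dimensions.
bias = tf.constant([0.1, 0.2, 0.3])
shifted = tf.add(images, bias)       # shape [1, 224, 224, 3]
```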

## Compatible Operations

The following TensorFlow operations are usually mapped to their TensorFlow Lite
counterparts (a small end-to-end conversion sketch follows the list):

*   [tf.batch_to_space_nd](https://www.tensorflow.org/api_docs/python/tf/batch_to_space_nd) -
    *as long as the input tensor is 4D (1 batch + 2 spatial + 1 other) and the
    crops attribute is not used*
*   [tf.exp](https://www.tensorflow.org/api_docs/python/tf/exp)
*   [tf.fake_quant*](https://www.tensorflow.org/api_docs/python/tf/fake_quant_with_min_max_args)
*   [tf.matmul](https://www.tensorflow.org/api_docs/python/tf/matmul) - *as long
    as the second argument is constant and transposition is not used*
*   [tf.nn.avg_pool](https://www.tensorflow.org/api_docs/python/tf/nn/avg_pool)
*   [tf.nn.conv2d](https://www.tensorflow.org/api_docs/python/tf/nn/conv2d) -
    *as long as the filter is constant*
*   [tf.nn.depthwise_conv2d](https://www.tensorflow.org/api_docs/python/tf/nn/depthwise_conv2d) -
    *as long as the filter is constant and rate is [1,1]*
*   [tf.nn.l2_normalize](https://www.tensorflow.org/api_docs/python/tf/nn/l2_normalize) -
    *as long as normalization is done along the last dimension*
*   [tf.nn.local_response_normalization](https://www.tensorflow.org/api_docs/python/tf/nn/local_response_normalization)
*   [tf.nn.log_softmax](https://www.tensorflow.org/api_docs/python/tf/nn/log_softmax) -
    *as long as axis is not provided*
*   [tf.nn.max_pool](https://www.tensorflow.org/api_docs/python/tf/nn/max_pool)
*   [tf.nn.softmax](https://www.tensorflow.org/api_docs/python/tf/nn/softmax) -
    *as long as tensors are 2D and axis is the last dimension*
*   [tf.nn.top_k](https://www.tensorflow.org/api_docs/python/tf/nn/top_k)
*   [tf.one_hot](https://www.tensorflow.org/api_docs/python/tf/one_hot)
*   [tf.pad](https://www.tensorflow.org/api_docs/python/tf/pad) - *as long as
    mode and constant_values are not used*
*   [tf.reduce_mean](https://www.tensorflow.org/api_docs/python/tf/reduce_mean) -
    *as long as the reduction_indices attribute is not used*
*   [tf.reshape](https://www.tensorflow.org/api_docs/python/tf/reshape)
*   [tf.sigmoid](https://www.tensorflow.org/api_docs/python/tf/sigmoid)
*   [tf.space_to_batch_nd](https://www.tensorflow.org/api_docs/python/tf/space_to_batch_nd) -
    *as long as the input tensor is 4D (1 batch + 2 spatial + 1 other)*
*   [tf.space_to_depth](https://www.tensorflow.org/api_docs/python/tf/space_to_depth)
*   [tf.split](https://www.tensorflow.org/api_docs/python/tf/split) - *as long
    as num is not provided and num_or_size_splits contains the number of splits
    as a 0D tensor*
*   [tf.squeeze](https://www.tensorflow.org/api_docs/python/tf/squeeze) - *as
    long as axis is not provided*
*   [tf.squared_difference](https://www.tensorflow.org/versions/master/api_docs/python/tf/squared_difference)
*   [tf.strided_slice](https://www.tensorflow.org/api_docs/python/tf/strided_slice) -
    *as long as ellipsis_mask and new_axis_mask are not used*
*   [tf.transpose](https://www.tensorflow.org/versions/master/api_docs/python/tf/transpose) -
    *as long as conjugate is not used*
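
Below is a hypothetical end-to-end sketch: a toy Keras model built only from
operations in the list above (convolution with a constant filter, max pooling,
reshape, matmul, softmax), converted with the TF 2.x API:

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(8, 3, activation="relu",
                           input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()  # raises if an op cannot be mapped
```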

## Straightforward Conversions, Constant-Folding and Fusing

A number of TensorFlow operations can be processed by TensorFlow Lite even
though they have no direct equivalent. This is the case for operations that can
be simply removed from the graph (tf.identity), replaced by tensors
(tf.placeholder), or fused into more complex operations (tf.nn.bias_add). Even
some supported operations may sometimes be removed through one of these
processes.
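
As a sketch of the fusing case (the converter typically performs this rewrite,
though the exact set of fused patterns depends on the version):

```python
import tensorflow as tf

@tf.function
def conv_block(x):
    # The constant filter and bias get folded into the op's parameters;
    # tf.nn.bias_add and tf.nn.relu are typically fused into a single
    # TFLite CONV_2D with fused_activation_function=RELU.
    filters = tf.constant(0.1, shape=[3, 3, 3, 8])
    bias = tf.constant(0.0, shape=[8])
    y = tf.nn.conv2d(x, filters, strides=[1, 1, 1, 1], padding="SAME")
    y = tf.nn.bias_add(y, bias)
    return tf.nn.relu(y)  # three TF ops, usually one TFLite op
```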

Here is a non-exhaustive list of TensorFlow operations that are usually removed
from the graph:

*   [tf.add](https://www.tensorflow.org/api_docs/python/tf/add)
*   [tf.check_numerics](https://www.tensorflow.org/api_docs/python/tf/check_numerics)
*   [tf.constant](https://www.tensorflow.org/api_docs/python/tf/constant)
*   [tf.div](https://www.tensorflow.org/api_docs/python/tf/div)
*   [tf.divide](https://www.tensorflow.org/api_docs/python/tf/divide)
*   [tf.fake_quant_with_min_max_args](https://www.tensorflow.org/api_docs/python/tf/fake_quant_with_min_max_args)
*   [tf.fake_quant_with_min_max_vars](https://www.tensorflow.org/api_docs/python/tf/fake_quant_with_min_max_vars)
*   [tf.identity](https://www.tensorflow.org/api_docs/python/tf/identity)
*   [tf.maximum](https://www.tensorflow.org/api_docs/python/tf/maximum)
*   [tf.minimum](https://www.tensorflow.org/api_docs/python/tf/minimum)
*   [tf.multiply](https://www.tensorflow.org/api_docs/python/tf/multiply)
*   [tf.no_op](https://www.tensorflow.org/api_docs/python/tf/no_op)
*   [tf.placeholder](https://www.tensorflow.org/api_docs/python/tf/placeholder)
*   [tf.placeholder_with_default](https://www.tensorflow.org/api_docs/python/tf/placeholder_with_default)
*   [tf.realdiv](https://www.tensorflow.org/api_docs/python/tf/realdiv)
*   [tf.reduce_max](https://www.tensorflow.org/api_docs/python/tf/reduce_max)
*   [tf.reduce_min](https://www.tensorflow.org/api_docs/python/tf/reduce_min)
*   [tf.reduce_sum](https://www.tensorflow.org/api_docs/python/tf/reduce_sum)
*   [tf.rsqrt](https://www.tensorflow.org/api_docs/python/tf/rsqrt)
*   [tf.shape](https://www.tensorflow.org/api_docs/python/tf/shape)
*   [tf.sqrt](https://www.tensorflow.org/api_docs/python/tf/sqrt)
*   [tf.square](https://www.tensorflow.org/api_docs/python/tf/square)
*   [tf.subtract](https://www.tensorflow.org/api_docs/python/tf/subtract)
*   [tf.tile](https://www.tensorflow.org/api_docs/python/tf/tile)
*   [tf.nn.batch_norm_with_global_normalization](https://www.tensorflow.org/api_docs/python/tf/nn/batch_norm_with_global_normalization)
*   [tf.nn.bias_add](https://www.tensorflow.org/api_docs/python/tf/nn/bias_add)
*   [tf.nn.fused_batch_norm](https://www.tensorflow.org/api_docs/python/tf/nn/fused_batch_norm)
*   [tf.nn.relu](https://www.tensorflow.org/api_docs/python/tf/nn/relu)
*   [tf.nn.relu6](https://www.tensorflow.org/api_docs/python/tf/nn/relu6)

Note that many of those operations don't have TensorFlow Lite equivalents and
the corresponding model will not be convertible if they can't be elided or
fused.

## Unsupported Operations

TensorFlow operations not listed above are likely unsupported. Notably, the
following common ops are not supported at the moment:

*   [tf.depth_to_space](https://www.tensorflow.org/api_docs/python/tf/depth_to_space)
*   [tf.image.resize_bilinear](https://www.tensorflow.org/api_docs/python/tf/image/resize_bilinear)
*   [tf.tanh](https://www.tensorflow.org/api_docs/python/tf/tanh)

## TensorFlow Lite Operations

The following TensorFlow Lite operations are fully supported and used in place
of the TensorFlow operations listed above:

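These are the operations the TensorFlow Lite interpreter executes at runtime.
As a minimal sketch of invoking a converted model (assuming `tflite_model`
holds the flatbuffer bytes produced by the converter, with a float input):

```python
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed a zero tensor of the expected input shape and run inference.
x = np.zeros(input_details[0]["shape"], dtype=np.float32)
interpreter.set_tensor(input_details[0]["index"], x)
interpreter.invoke()
y = interpreter.get_tensor(output_details[0]["index"])
```
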
**ABS**

```
Inputs {
  0: a tensor
}
Outputs {
  0: elementwise abs of the input
}
```

**ADD**

```
Inputs {
  0: a tensor
  1: a tensor
}
Outputs {
  0: elementwise sum of the input tensors
}
Options {
  fused_activation_function:  NONE|RELU|RELU6
}
```

**ADD_N**

```
Inputs {
  0-N: any number of tensors (must have same size and shape)
}
Outputs {
  0: elementwise sum of the input tensors
}
```

**ARG_MAX**

```
Inputs {
  0: a tensor
  1: a tensor
}
Outputs {
  0: A tensor of indices of maximum values.
}
```

**ARG_MIN**

```
Inputs {
  0: a tensor
  1: a tensor
}
Outputs {
  0: A tensor of indices of minimum values.
}
```

**AVERAGE_POOL_2D**

```
Inputs {
  0: a tensor
}
Outputs {
  0: a tensor where each entry is the mean of the input values in the
     corresponding window.
}
Options {
  fused_activation_function:  NONE|RELU|RELU6
  padding: SAME|VALID
  stride_w,stride_h: stride of the sliding window
  filter_width,filter_height: size of the sliding window
}
```

**BATCH_TO_SPACE_ND**

```
Inputs {
  0: 4D tensor
  1: 1D tensor
  2: 2D tensor
}
Outputs {
  0: tensor rearranged using block_shape. See tf.batch_to_space_nd for
     details.
}
```

**CONCATENATION**

```
Inputs {
  0-N: any number of tensors
}
Outputs {
  0: concatenation of the input tensors along the given axis.
}
Options {
  fused_activation_function:  NONE|RELU|RELU6
  axis: dimension along which the concatenation is performed
}
```

**CONV_2D**

```
Inputs {
  0: 4D tensor
  1: filter
  2: bias (optional)
}
Outputs {
  0: result of 2D convolution of the input tensor
}
Options {
  fused_activation_function:  NONE|RELU|RELU6
  padding: SAME|VALID
  stride_w,stride_h: stride of the filter window
}
```

**CONV_2D_TRANSPOSE**

```
Inputs {
  0: output_shape
  1: filter
  2: 4D tensor
}
Outputs {
  0: the transpose (gradient) of conv2d
}
Options {
  padding: SAME|VALID
  stride_w,stride_h: stride of the filter window
}
```

**DEPTHWISE_CONV_2D**

```
Inputs {
  0: 4D tensor
  1: filter
  2: bias (optional)
}
Outputs {
  0: result of a depthwise-2D convolution of the input tensor
}
Options {
  fused_activation_function:  NONE|RELU|RELU6
  padding: SAME|VALID
  stride_w,stride_h: stride of the filter window
  depth_multiplier: relation between the last dimension of the input and output
    tensors
}
```

**ELU**

```
Inputs {
  0: a tensor
}
Outputs {
  0: a tensor equivalent to exp(features) - 1 if features < 0, features
     otherwise.
}
```

**EQUAL**

```
Inputs {
  0: a tensor
  1: a tensor
}
Outputs {
  0: a tensor of type bool, true whenever an element of the first tensor is
  equal to the corresponding element of the second tensor.
}
```

**EXP**

```
Inputs {
  0: tensor
}
Outputs {
  0: result of computing element-wise exponential of the input tensor
}
```

**FILL**

```
Inputs {
  0: a 1D tensor of type int32 or int64, representing the shape of the output
  1: a 0D (scalar) tensor, the value to fill with
}
Outputs {
  0: A tensor of shape `tensor 0` filled with the value in `tensor 1`.
}
```

**FLOOR**

```
Inputs {
  0: tensor
}
Outputs {
  0: result of computing element-wise floor of the input tensor
}
```

**FLOOR_DIV**

```
Inputs {
  0: a tensor
  1: a tensor
}
Outputs {
  0: result of computing element-wise floor of `tensor 0` divided by `tensor 1`.
}
```

**FLOOR_MOD**

```
Inputs {
  0: a tensor
  1: a tensor
}
Outputs {
  0: result of computing element-wise floor of `tensor 0` modulo `tensor 1`.
}
```

**CEIL**

```
Inputs {
  0: tensor
}
Outputs {
  0: result of computing element-wise ceil of the input tensor
}
```

**FULLY_CONNECTED**

```
Inputs {
  0: 4D tensor
  1: filter
  2: bias (optional)
}
Outputs {
  0: output of a fully (densely) connected layer, which connects all
     elements in the input tensor with each element in this tensor.
}
Options {
  fused_activation_function:  NONE|RELU|RELU6
}
```

**GATHER**

```
Inputs {
  0: params tensor
  1: indices tensor
  2: axis tensor (optional)
}
Outputs {
  0: a tensor with same type as the params tensor.
}
```

**GATHER_ND**

```
Inputs {
  0: params tensor
  1: indices tensor
}
Outputs {
  0: a tensor with same type as the params tensor.
}
```

**GREATER**

```
Inputs {
  0: a tensor
  1: a tensor
}
Outputs {
  0: a tensor of type bool, true whenever an element of the first tensor is
  greater than the corresponding element of the second tensor.
}
```

**GREATER_EQUAL**

```
Inputs {
  0: a tensor
  1: a tensor
}
Outputs {
  0: a tensor of type bool, true whenever an element of the first tensor is
  greater than or equal to the corresponding element of the second tensor.
}
```

**L2_NORMALIZATION**

```
Inputs {
  0: input tensor
}
Outputs {
  0: normalized tensor (along the last dimension)
}
Options {
  fused_activation_function:  NONE|RELU|RELU6
}
```

**L2_POOL_2D**

```
Inputs {
  0: a tensor
}
Outputs {
  0: a tensor equivalent to tf.sqrt(tf.nn.avg_pool(tf.square(input)))
}
Options {
  fused_activation_function:  NONE|RELU|RELU6
  padding: SAME|VALID
  stride_w,stride_h: stride of the sliding window
  filter_width,filter_height: size of the sliding window
}
```

**LEAKY_RELU**

```
Inputs {
  0: a tensor
}
Outputs {
  0: a tensor equivalent to max(input, input * alpha)
}
Options {
  alpha: slope of the activation at x < 0 (provided alpha <= 1)
}
```

**LESS**

```
Inputs {
  0: a tensor
  1: a tensor
}
Outputs {
  0: a tensor of type bool, true whenever an element of the first tensor is less
  than the corresponding element of the second tensor.
}
```

**LESS_EQUAL**

```
Inputs {
  0: a tensor
  1: a tensor
}
Outputs {
  0: a tensor of type bool, true whenever an element of the first tensor is less
  than or equal to the corresponding element of the second tensor.
}
```

**LOCAL_RESPONSE_NORMALIZATION**

```
Inputs {
  0: a tensor
}
Outputs {
  0: a tensor equivalent to tf.nn.local_response_normalization
}
Options {
  radius
  bias
  alpha
  beta
}
```

**LOGICAL_OR**

```
Inputs {
  0: a list of tensors.
  1: a list of tensors.
}
Outputs {
  0: a tensor of type bool, the elementwise logical OR of the input tensors.
}
```

**LOGISTIC**

```
Inputs {
  0: a tensor
}
Outputs {
  0: a tensor equivalent to 1 / (1 + exp(-input))
}
```

**LOG**

```
Inputs {
  0: a tensor
}
Outputs {
  0: a tensor equivalent to log(input)
}
```

**LOG_SOFTMAX**

```
Inputs {
  0: tensor
}
Outputs {
  0: tensor equivalent to logits - log(reduce_sum(exp(logits), -1))
}
```

**MAX_POOL_2D**

```
Inputs {
  0: a tensor
}
Outputs {
  0: a tensor where each entry is the maximum of the input values in the
     corresponding window.
}
Options {
  fused_activation_function:  NONE|RELU|RELU6
  padding: SAME|VALID
  stride_w,stride_h: stride of the sliding window
  filter_width,filter_height: size of the sliding window
}
```

**MUL**

```
Inputs {
  0: a tensor
  1: a tensor
}
Outputs {
  0: elementwise multiplication of the input tensors
}
Options {
  fused_activation_function:  NONE|RELU|RELU6
}
```

**NEG**

```
Inputs {
  0: a tensor
}
Outputs {
  0: elementwise negation of the input tensor
}
```

**PACK**

```
Inputs {
  0: a list of tensors.
  1: an integer.
}
Outputs {
  0: A tensor of stacked tensors.
}
```

**PAD**

```
Inputs {
  0: tensor
  1: tensor
}
Outputs {
  0: tensor where additional values are added before and after the contents of
     each dimension
}
```

**MEAN (tf.reduce_mean)**

```
Inputs {
  0: tensor
  1: tensor
}
Outputs {
  0: tensor containing the mean of the elements
}
Options {
  keep_dims: whether to retain reduced dimensions
}
```

**NOT_EQUAL**

```
Inputs {
  0: a tensor
  1: a tensor
}
Outputs {
  0: a tensor of type bool, true whenever an element of the first tensor is not
  equal to the corresponding element of the second tensor.
}
```

**POW**

```
Inputs {
  0: a tensor
  1: a tensor
}
Outputs {
  0: elementwise pow of the input tensors
}
```

**RANGE**

```
Inputs {
  0: a 0D (scalar) tensor
  1: a 0D (scalar) tensor
  2: a 0D (scalar) tensor
}
Outputs {
  0: A 1D tensor of type `dtype` defined by a sequence where `tensor 0` is the
  start, `tensor 1` is the limit, and `tensor 2` is the delta.
}
Options {
  dtype
}
```

**RANK**

```
Inputs {
  0: a tensor
}
Outputs {
  0: a 0-D int32 Tensor representing the rank of input
}
```

**RELU**

```
Inputs {
  0: a tensor
}
Outputs {
  0: a tensor equivalent to max(0, input)
}
```

**RELU_N1_TO_1**

```
Inputs {
  0: a tensor
}
Outputs {
  0: a tensor equivalent to max(-1, min(input, 1))
}
```

**RELU6**

```
Inputs {
  0: a tensor
}
Outputs {
  0: a tensor equivalent to max(0, min(input, 6))
}
```

**RESHAPE**

```
Inputs {
  0: a tensor
  1: ignored
}
Outputs {
  0: a tensor with the same elements as the input but with the new shape
}
Options {
  new_shape
}
```

**RESIZE_NEAREST_NEIGHBOR**

```
Inputs {
  0: a 4D tensor
  1: a 1D tensor with 2 elements
}
Outputs {
  0: A tensor of type `tensor 0` resized according to `tensor 1` height/width values
  using nearest neighbors interpolation.
}
Options {
  align_corners
}
```

**RSQRT**

```
Inputs {
  0: a tensor
}
Outputs {
  0: result of computing element-wise reciprocal square root of the input tensor
}
```

**REVERSE_SEQUENCE**

```
Inputs {
  0: a tensor
  1: a 1-D tensor which specifies the length of sequence to be reversed in each
  dim
}
Outputs {
  0: a tensor with the same shape as the input tensor
}
Options {
  seq_dim: a 0-D int tensor (scalar). The dimension which is partially
  reversed.
  batch_dim: a 0-D int tensor (scalar). Defaults to 0. The dimension along
  which reversal is performed.
}
```

**SHAPE**

```
Inputs {
  0: a tensor
}
Outputs {
  0: a 1D tensor representing the shape of the input tensor
}
Options {
  out_type: the output type of the op (int32 or int64). Defaults to int32.
}
```

**SLICE**

```
Inputs {
  0: tensor
  1: 1D tensor
  2: 1D tensor
}
Outputs {
  0: slice of the input tensor of the given size from the given begin index.
}
```

**SOFTMAX**

```
Inputs {
  0: a tensor
}
Outputs {
  0: a tensor equivalent to exp(input * beta) / tf.reduce_sum(exp(input * beta),
     dim), where dim is always the last dimension of the input tensor.
}
Options {
  beta
}
```

**SPACE_TO_DEPTH**

```
Inputs {
  0: a 4D tensor
}
Outputs {
  0: a tensor rearranged using block_size. See tf.space_to_depth for details.
}
Options {
  block_size
}
```

**SPACE_TO_BATCH_ND**

```
Inputs {
  0: 4D tensor
  1: 1D tensor
  2: 2D tensor
}
Outputs {
  0: a tensor rearranged using block_shape. See tf.space_to_batch_nd for
     details.
}
```

**SPARSE_TO_DENSE**

```
Inputs {
  0: 0D or 1D or 2D tensor
  1: 1D tensor
  2: 0D or 1D tensor
  3: 0D tensor
  4: a boolean value
}
Outputs {
  0: Dense Tensor of shape output_shape. Has the same type as sparse_values.
}
```

**SPLIT**

```
Inputs {
  0: 0D tensor (axis)
  1: tensor (input)
}
Outputs {
  0-N: subtensors built from the input tensors
}
Options {
  num_splits: Specifies number of outputs
}
```

**SPLIT_V**

```
Inputs {
  0: tensor (input)
  1: 1-D tensor (size_splits)
  2: 0-D tensor (axis)
}
Outputs {
  0-N: subtensors built from the input tensors
}
Options {
  num_splits: Specifies number of outputs
}
```

**SQRT**

```
Inputs {
  0: a tensor
}
Outputs {
  0: result of computing element-wise square root of the input tensor
}
```

**SQUEEZE**

```
Inputs {
  0: tensor
}
Outputs {
  0: tensor without any dimensions of size 1
}
Options {
  squeeze_dims
}
```

**STRIDED_SLICE**

```
Inputs {
  0: tensor
  1: 1D tensor
  2: 1D tensor
  3: 1D tensor
}
Outputs {
  0: slice of the input tensor of the given size
}
Options {
  begin_mask: mask for begin indices
  end_mask: mask for end indices
  shrink_axis_mask: mask that indicates which dimensions to remove
}
```

**TOP_K**

```
Inputs {
  0: tensor
  1: 0D tensor
}
Outputs {
  0: k largest elements along each last-dimensional slice
  1: indices of values within the last dimension of the input tensor
}
```

**TRANSPOSE**

```
Inputs {
  0: tensor
  1: tensor
}
Outputs {
  0: tensor permuted according to perm
}
```

**SELECT**

```
Inputs {
  0: tensor
  1: tensor
  2: tensor
}
Outputs {
  0: tensor that contains the elementwise values of 'tensor 1' if the
  corresponding value of 'tensor 0' is true or the value of 'tensor 2' if false.
}
```

**UNPACK**

```
Inputs {
  0: a tensor.
  1: an integer.
  2: an integer.
}
Outputs {
  0-N: tensors of unpacked tensor.
}
```

**WHERE**

```
Inputs {
  0: A tensor of type bool.
  1: A tensor which may have the same shape as condition. If condition is rank
     1, x may have higher rank, but its first dimension must match the size of
     condition.
  2: A tensor with the same shape and type as x.
}
Outputs {
  0: A tensor with the same type and shape as x, y if they are non-None, or
     a tensor with shape (num_true, dim_size(condition)).
}
```

**ZEROS_LIKE**

```
Inputs {
  0: a tensor
}
Outputs {
  0: A tensor of the same shape and type as x but filled with zeros
}
```

And these are TensorFlow Lite operations that are present but not ready for
custom models yet:

*   CALL
*   CONCAT_EMBEDDINGS
*   CUSTOM
*   EMBEDDING_LOOKUP
*   EMBEDDING_LOOKUP_SPARSE
*   HASHTABLE_LOOKUP
*   LSH_PROJECTION
*   LSTM
*   RESIZE_BILINEAR
*   RNN
*   SKIP_GRAM
*   SVDF
*   TANH