# Building TensorFlow Lite Standalone Pip

Many users would like to deploy the TensorFlow Lite interpreter and use it from
Python without requiring the rest of TensorFlow.

## Steps

To build a binary wheel, run this script:

```sh
sudo apt install swig libjpeg-dev zlib1g-dev python3-dev python3-numpy
pip install numpy pybind11
sh tensorflow/lite/tools/pip_package/build_pip_package_with_cmake.sh
```

That will print some output and produce a `.whl` file. You can then install it:

```sh
pip install --upgrade <wheel>
```

You can also build a wheel inside a Docker container using the `make` tool. For
example, the following commands will cross-compile the tflite-runtime package
for Python 2.7 and Python 3.7 (from Debian Buster) on Raspberry Pi:

```sh
make BASE_IMAGE=debian:buster PYTHON=python TENSORFLOW_TARGET=rpi docker-build
make BASE_IMAGE=debian:buster PYTHON=python3 TENSORFLOW_TARGET=rpi docker-build
```

Another option is to cross-compile for Python 3.5 (from Debian Stretch) on an
ARM64 board:

```sh
make BASE_IMAGE=debian:stretch PYTHON=python3 TENSORFLOW_TARGET=aarch64 docker-build
```

To build for Python 3.6 (from Ubuntu 18.04) on x86_64 (native to the Docker
image), run:

```sh
make BASE_IMAGE=ubuntu:18.04 PYTHON=python3 TENSORFLOW_TARGET=native docker-build
```

In addition to the wheel, you can build a Debian package by adding
`BUILD_DEB=y` to the make command (only for Python 3):

```sh
make BASE_IMAGE=debian:buster PYTHON=python3 TENSORFLOW_TARGET=rpi BUILD_DEB=y docker-build
```

## Alternative build with Bazel (experimental)

There is an alternative set of build steps for the binary wheel that uses Bazel
instead of the Makefile. You don't need to install additional dependencies.
This approach can leverage TF's ci_build.sh for ARM cross builds.
### Normal build for your workstation

```sh
tensorflow/lite/tools/pip_package/build_pip_package_with_bazel.sh
```

### Optimized build for your workstation

The output may have compatibility issues with other machines, but it gives the
best performance for your workstation.

```sh
tensorflow/lite/tools/pip_package/build_pip_package_with_bazel.sh native
```

### Cross build for armhf Python 3.5

```sh
tensorflow/tools/ci_build/ci_build.sh PI-PYTHON3 \
  tensorflow/lite/tools/pip_package/build_pip_package_with_bazel.sh armhf
```

### Cross build for armhf Python 3.7

```sh
tensorflow/tools/ci_build/ci_build.sh PI-PYTHON37 \
  tensorflow/lite/tools/pip_package/build_pip_package_with_bazel.sh armhf
```

### Cross build for aarch64 Python 3.5

```sh
tensorflow/tools/ci_build/ci_build.sh PI-PYTHON3 \
  tensorflow/lite/tools/pip_package/build_pip_package_with_bazel.sh aarch64
```

### Cross build for aarch64 Python 3.8

```sh
tensorflow/tools/ci_build/ci_build.sh PI-PYTHON38 \
  tensorflow/lite/tools/pip_package/build_pip_package_with_bazel.sh aarch64
```

### Cross build for aarch64 Python 3.9

```sh
tensorflow/tools/ci_build/ci_build.sh PI-PYTHON39 \
  tensorflow/lite/tools/pip_package/build_pip_package_with_bazel.sh aarch64
```

### Native build for Windows

```sh
bash tensorflow/lite/tools/pip_package/build_pip_package_with_bazel.sh windows
```

## Enable TF op support (Flex delegate)

If you want to use TF ops with the Python API, you need to enable Flex support.
You can build the TFLite interpreter with Flex ops support by passing
`--define=tflite_pip_with_flex=true` to Bazel.

Here are some examples.
### Normal build with Flex for your workstation

```sh
CUSTOM_BAZEL_FLAGS=--define=tflite_pip_with_flex=true \
  tensorflow/lite/tools/pip_package/build_pip_package_with_bazel.sh
```

### Cross build with Flex for armhf Python 3.7

```sh
CI_DOCKER_EXTRA_PARAMS="-e CUSTOM_BAZEL_FLAGS=--define=tflite_pip_with_flex=true" \
  tensorflow/tools/ci_build/ci_build.sh PI-PYTHON37 \
  tensorflow/lite/tools/pip_package/build_pip_package_with_bazel.sh armhf
```

## Usage

Note that, unlike TensorFlow, this package is installed under the
`tflite_runtime` namespace. You can then use the TensorFlow Lite interpreter as
follows:

```python
from tflite_runtime.interpreter import Interpreter

interpreter = Interpreter(model_path="foo.tflite")
interpreter.allocate_tensors()
```

This currently works for builds on Linux machines, including Raspberry Pi. In
the future, cross-compilation from a bigger host to smaller SoCs like Raspberry
Pi will be supported.

## Caveats

*   You cannot use TensorFlow Select ops, only TensorFlow Lite builtins.
*   Currently, custom ops and delegates cannot be registered.
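For context, the short Usage snippet above can be expanded into a complete
single-inference helper. This is a minimal sketch, not an official example from
the project: the `run_inference` helper, the model path, and the assumption of
a single input and a single output tensor are all hypothetical.

```python
import numpy as np


def run_inference(model_path, input_data):
    """Run one forward pass and return the first output tensor.

    Assumes a model with a single input and a single output tensor.
    """
    # Prefer the standalone package built above; fall back to the full
    # TensorFlow package if tflite_runtime is not installed.
    try:
        from tflite_runtime.interpreter import Interpreter
    except ImportError:
        from tensorflow.lite import Interpreter

    interpreter = Interpreter(model_path=model_path)
    interpreter.allocate_tensors()

    input_details = interpreter.get_input_details()[0]
    output_details = interpreter.get_output_details()[0]

    # Cast the input to the dtype the model expects before feeding it.
    interpreter.set_tensor(input_details["index"],
                           input_data.astype(input_details["dtype"]))
    interpreter.invoke()
    return interpreter.get_tensor(output_details["index"])


# Hypothetical call; "foo.tflite" and the input shape are placeholders:
# result = run_inference("foo.tflite", np.zeros((1, 224, 224, 3), np.float32))
```

Because the wheel contains only the interpreter, this is the full extent of the
Python API surface you can rely on; anything that needs conversion or training
still requires the full TensorFlow package.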