# Introduction

The ArmNN Delegate can be found within the ArmNN repository, and although it is a standalone piece of software,
it makes use of the ArmNN library. For this reason we have added two options to build the delegate: the first option
allows you to build the delegate together with the ArmNN library; the second option is a standalone build
of the delegate.

This tutorial uses an AArch64 machine with Ubuntu 18.04 installed that can build all components
natively (no cross-compilation required). This is to keep this guide simple.
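
If you want to confirm that your machine matches this setup before starting, a quick check (purely optional) is:
```bash
uname -m        # expected output: aarch64
lsb_release -d  # expected output: Description: Ubuntu 18.04.x LTS
```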

1. [Dependencies](#dependencies)
   * [Build Tensorflow for C++](#build-tensorflow-for-c)
   * [Build Flatbuffers](#build-flatbuffers)
   * [Build the Arm Compute Library](#build-the-arm-compute-library)
   * [Build the ArmNN Library](#build-the-armnn-library)
2. [Build the TfLite Delegate (Stand-Alone)](#build-the-tflite-delegate-stand-alone)
3. [Build the Delegate together with ArmNN](#build-the-delegate-together-with-armnn)
4. [Integrate the ArmNN TfLite Delegate into your project](#integrate-the-armnn-tflite-delegate-into-your-project)

# Dependencies

Build Dependencies:
 * Tensorflow and Tensorflow Lite version 2.3.1
 * Flatbuffers 1.12.0
 * ArmNN 20.11 or higher

Required Tools:
 * Git
 * pip
 * wget
 * zip
 * unzip
 * cmake 3.7.0 or higher
 * scons
 * bazel 3.1.0

Our first step is to build all the build dependencies mentioned above. We will have to create quite a few
directories, so to make navigation a bit easier, define a base directory for the project. At this stage we can also
install all the tools that are required during the build.
```bash
export BASEDIR=/home
cd $BASEDIR
# python3-pip provides the pip3 command used in the next section
apt-get update && apt-get install git wget unzip zip python python3-pip cmake scons
```

## Build Tensorflow for C++
Tensorflow has a few dependencies of its own. It requires the python packages pip3, numpy, wheel and keras_preprocessing,
and also bazel, which is used to compile Tensorflow. A description of how to build bazel can be
found [here](https://docs.bazel.build/versions/master/install-compile-source.html). There are multiple ways to do this;
I decided to compile from source because that should work for any platform and therefore adds the most value
to this guide. Depending on your operating system and architecture there might be an easier way.
```bash
# Install the python packages
pip3 install -U pip numpy wheel
pip3 install -U keras_preprocessing --no-deps

# Bazel has a dependency on JDK
apt-get install openjdk-11-jdk
# Build Bazel
wget -O bazel-3.1.0-dist.zip https://github.com/bazelbuild/bazel/releases/download/3.1.0/bazel-3.1.0-dist.zip
unzip -d bazel bazel-3.1.0-dist.zip
cd bazel
env EXTRA_BAZEL_ARGS="--host_javabase=@local_jdk//:jdk" bash ./compile.sh
# This creates an "output" directory where the bazel binary can be found

# Download Tensorflow
cd $BASEDIR
git clone https://github.com/tensorflow/tensorflow.git
cd tensorflow/
git checkout tags/v2.3.1 # Minimum version required for the delegate
```
Before Tensorflow can be built, targets need to be defined in the `BUILD` file that can be
found in the root directory of Tensorflow. Append the following two targets to the file:
```
cc_binary(
     name = "libtensorflow_all.so",
     linkshared = 1,
     deps = [
         "//tensorflow/core:framework",
         "//tensorflow/core:tensorflow",
         "//tensorflow/cc:cc_ops",
         "//tensorflow/cc:client_session",
         "//tensorflow/cc:scope",
         "//tensorflow/c:c_api",
     ],
)
cc_binary(
     name = "libtensorflow_lite_all.so",
     linkshared = 1,
     deps = [
         "//tensorflow/lite:framework",
         "//tensorflow/lite/kernels:builtin_ops",
     ],
)
```
Now the build process can be started. When calling "configure", as below, a dialog shows up that asks the
user to specify additional options. If you don't have any particular needs for your build, decline all
additional options and choose default values. Building `libtensorflow_all.so` requires quite some time.
This might be a good time to get yourself another drink and take a break.
```bash
PATH="$BASEDIR/bazel/output:$PATH" ./configure
$BASEDIR/bazel/output/bazel build --define=grpc_no_ares=true --config=opt --config=monolithic --strip=always --config=noaws libtensorflow_all.so
$BASEDIR/bazel/output/bazel build --config=opt --config=monolithic --strip=always libtensorflow_lite_all.so
```
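
If the builds succeed, both shared libraries end up under `bazel-bin`, which is the directory the delegate build
points at later (via `TENSORFLOW_LIB_DIR` and `TFLITE_LIB_ROOT`). An optional sanity check:
```bash
ls -lh $BASEDIR/tensorflow/bazel-bin/libtensorflow_all.so \
       $BASEDIR/tensorflow/bazel-bin/libtensorflow_lite_all.so
```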

## Build Flatbuffers

Flatbuffers is a memory-efficient cross-platform serialization library as
described [here](https://google.github.io/flatbuffers/). It is used in tflite to store models and is also a dependency
of the delegate. After downloading the right version it can be built and installed using cmake.
```bash
cd $BASEDIR
wget -O flatbuffers-1.12.0.zip https://github.com/google/flatbuffers/archive/v1.12.0.zip
unzip -d . flatbuffers-1.12.0.zip
cd flatbuffers-1.12.0
mkdir install && mkdir build && cd build
# I'm using a custom install directory rather than the system default, but that is not required
cmake .. -DCMAKE_INSTALL_PREFIX:PATH=$BASEDIR/flatbuffers-1.12.0/install
make install
```
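
To check that Flatbuffers was installed where expected, you can run the schema compiler that ships with it
(an optional check, assuming the default cmake options that also build `flatc`):
```bash
$BASEDIR/flatbuffers-1.12.0/install/bin/flatc --version
```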

## Build the Arm Compute Library

The ArmNN library depends on the Arm Compute Library (ACL), which provides a set of functions that are optimized for
both Arm CPUs and GPUs. ArmNN uses ACL directly to run machine learning workloads on Arm CPUs and GPUs.

It is important to pair the right version of ACL with the right version of ArmNN. Luckily, ArmNN and ACL are developed
very closely and released together: if you would like to use ArmNN version "20.11", you can use the same "20.11"
version for ACL too.

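If you are unsure which release branches exist, you can list them without cloning anything (a convenience step,
not required; the branch naming convention is shown in the checkout examples below):
```bash
git ls-remote --heads https://review.mlplatform.org/ml/ComputeLibrary | grep arm_compute_20
git ls-remote --heads https://review.mlplatform.org/ml/armnn | grep armnn_20
```
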
To build the Arm Compute Library on your platform, download the Arm Compute Library, check out the branch
that contains the version you want to use, and build it using `scons`.
```bash
cd $BASEDIR
git clone https://review.mlplatform.org/ml/ComputeLibrary
cd ComputeLibrary/
git checkout <branch_name> # e.g. branches/arm_compute_20_11
# The machine used for this guide only has a Neon CPU, which is why only "neon=1" is set. If your
# machine has an Arm GPU you can enable that by adding "opencl=1 embed_kernels=1" to the command below.
scons arch=arm64-v8a neon=1 extra_cxx_flags="-fPIC" benchmark_tests=0 validation_tests=0
```

## Build the ArmNN Library

After building ACL we can continue with building ArmNN. To do so, download the repository and check out the same
version as you did for ACL. Then create a build directory and use cmake to build it.
```bash
cd $BASEDIR
git clone "https://review.mlplatform.org/ml/armnn"
cd armnn
git checkout <branch_name> # e.g. branches/armnn_20_11
mkdir build && cd build
# if you've got an Arm GPU add `-DARMCOMPUTECL=1` to the command below
cmake .. -DARMCOMPUTE_ROOT=$BASEDIR/ComputeLibrary -DARMCOMPUTENEON=1 -DBUILD_UNIT_TESTS=0
make
```

# Build the TfLite Delegate (Stand-Alone)

The delegate, like ArmNN, is built using cmake. Create a build directory as usual and build the delegate
with the additional cmake arguments shown below:
```bash
cd $BASEDIR/armnn/delegate && mkdir build && cd build
# TENSORFLOW_LIB_DIR: directory where the Tensorflow libraries can be found
# TENSORFLOW_ROOT:    the top directory of the Tensorflow repository
# TFLITE_LIB_ROOT:    in our case the same as TENSORFLOW_LIB_DIR
# FLATBUFFERS_ROOT:   the Flatbuffers install directory
# Armnn_DIR:          directory where the ArmNN library can be found
# ARMNN_SOURCE_DIR:   the top directory of the ArmNN repository, required for the ArmNN includes
cmake .. -DTENSORFLOW_LIB_DIR=$BASEDIR/tensorflow/bazel-bin \
         -DTENSORFLOW_ROOT=$BASEDIR/tensorflow \
         -DTFLITE_LIB_ROOT=$BASEDIR/tensorflow/bazel-bin \
         -DFLATBUFFERS_ROOT=$BASEDIR/flatbuffers-1.12.0/install \
         -DArmnn_DIR=$BASEDIR/armnn/build \
         -DARMNN_SOURCE_DIR=$BASEDIR/armnn
make
```

To ensure that the build was successful you can run the unit tests for the delegate, which can be found in
the delegate's build directory. [Doctest](https://github.com/onqtam/doctest) was used to create those tests. Using test filters you can
filter out tests that your build is not configured for. In this case, because ArmNN was only built for Cpu
acceleration (CpuAcc), we filter for all test suites that have `CpuAcc` in their name.
```bash
cd $BASEDIR/armnn/delegate/build
./DelegateUnitTests --test-suite=*CpuAcc*
```
If you have built for Gpu acceleration as well, you might want to change your test-suite filter:
```bash
./DelegateUnitTests --test-suite=*CpuAcc*,*GpuAcc*
```


# Build the Delegate together with ArmNN

In the introduction it was mentioned that the delegate build can be integrated into ArmNN. This is
pretty straightforward: the cmake arguments that were previously used for the delegate have to be added
to the ArmNN cmake arguments, and one additional argument, `BUILD_ARMNN_TFLITE_DELEGATE`, is needed to
instruct ArmNN to build the delegate as well. The new commands to build ArmNN are as follows:
```bash
cd $BASEDIR
git clone "https://review.mlplatform.org/ml/armnn"
cd armnn
git checkout <branch_name> # e.g. branches/armnn_20_11
mkdir build && cd build
# if you've got an Arm GPU add `-DARMCOMPUTECL=1` to the command below
cmake .. -DARMCOMPUTE_ROOT=$BASEDIR/ComputeLibrary \
         -DARMCOMPUTENEON=1 \
         -DBUILD_UNIT_TESTS=0 \
         -DBUILD_ARMNN_TFLITE_DELEGATE=1 \
         -DTENSORFLOW_LIB_DIR=$BASEDIR/tensorflow/bazel-bin \
         -DTENSORFLOW_ROOT=$BASEDIR/tensorflow \
         -DTFLITE_LIB_ROOT=$BASEDIR/tensorflow/bazel-bin \
         -DFLATBUFFERS_ROOT=$BASEDIR/flatbuffers-1.12.0/install
make
```
The delegate library can then be found in `build/armnn/delegate`.
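
To confirm that the delegate library was actually produced, you can search the build tree for it. Note that
`libarmnnDelegate.so` is the expected library name here; it may differ between releases:
```bash
find $BASEDIR/armnn/build -name "libarmnnDelegate.so*"
```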


# Integrate the ArmNN TfLite Delegate into your project

The delegate can be integrated into your C++ project by creating a TfLite Interpreter and
instructing it to use the ArmNN delegate for the graph execution. This should look similar
to the following code snippet.
```cpp
// Create the TfLite Interpreter
// (tfLiteModel is assumed to be a tflite::FlatBufferModel you have loaded beforehand)
std::unique_ptr<tflite::Interpreter> armnnDelegateInterpreter;
tflite::InterpreterBuilder(tfLiteModel, ::tflite::ops::builtin::BuiltinOpResolver())
                          (&armnnDelegateInterpreter);

// Create the ArmNN Delegate
// (backends is a vector of armnn::BackendId, e.g. {armnn::Compute::CpuAcc})
armnnDelegate::DelegateOptions delegateOptions(backends);
std::unique_ptr<TfLiteDelegate, decltype(&armnnDelegate::TfLiteArmnnDelegateDelete)>
                    theArmnnDelegate(armnnDelegate::TfLiteArmnnDelegateCreate(delegateOptions),
                                     armnnDelegate::TfLiteArmnnDelegateDelete);

// Instruct the Interpreter to use the armnnDelegate
armnnDelegateInterpreter->ModifyGraphWithDelegate(theArmnnDelegate.get());
```
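
Once the delegate is attached, running inference follows the usual TfLite pattern. A minimal sketch, assuming
a model with a single float input and a single float output (error handling abbreviated):
```cpp
// Allocate tensor buffers after the delegate has been applied
if (armnnDelegateInterpreter->AllocateTensors() != kTfLiteOk)
{
    // handle the error
}

// Fill the first input tensor with your data
float* inputData = armnnDelegateInterpreter->typed_input_tensor<float>(0);
// ... copy your input values into inputData ...

// Run inference; operators supported by the delegate execute through ArmNN
if (armnnDelegateInterpreter->Invoke() != kTfLiteOk)
{
    // handle the error
}

// Read the results back from the first output tensor
float* outputData = armnnDelegateInterpreter->typed_output_tensor<float>(0);
// ... use outputData ...
```
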
For further information on using TfLite Delegates,
please visit the [Tensorflow website](https://www.tensorflow.org/lite/guide).