<br>
<div align="center">
  <img src="Arm_NN_horizontal_blue.png" class="center" alt="Arm NN Logo" width="300"/>
</div>

* [Quick Start Guides](#quick-start-guides)
* [Pre-Built Binaries](#pre-built-binaries)
* [Software Overview](#software-overview)
* [Get Involved](#get-involved)
* [Contributions](#contributions)
* [Disclaimer](#disclaimer)
* [License](#license)
* [Third-Party](#third-party)
* [Build Flags](#build-flags)

# Arm NN

**Arm NN** is the **most performant** machine learning (ML) inference engine for Android and Linux, accelerating ML
on **Arm Cortex-A CPUs and Arm Mali GPUs**. This ML inference engine is an open-source SDK which bridges the gap
between existing neural network frameworks and power-efficient Arm IP.

Arm NN outperforms generic ML libraries due to **Arm architecture-specific optimizations** (e.g. SVE2) by utilizing the
**[Arm Compute Library (ACL)](https://github.com/ARM-software/ComputeLibrary/)**. To target Arm Ethos-N NPUs, Arm NN
uses the [Ethos-N NPU Driver](https://github.com/ARM-software/ethos-n-driver-stack). For Arm Cortex-M acceleration,
please see [CMSIS-NN](https://github.com/ARM-software/CMSIS_5).

Arm NN is written in portable **C++14** and built using [CMake](https://cmake.org/), enabling builds for a wide
variety of target platforms from a wide variety of host environments. **Python** developers can interface with Arm NN
through our **Arm NN TF Lite Delegate**.


## Quick Start Guides
**The Arm NN TF Lite Delegate provides the widest ML operator support in Arm NN** and is an easy way to accelerate
your ML model. To start using the TF Lite Delegate, first download the **[Pre-Built Binaries](#pre-built-binaries)** for
the latest release of Arm NN. Using a Python interpreter, you can load your TF Lite model into the Arm NN TF Lite
Delegate and run accelerated inference. Please see this
**[Quick Start Guide](delegate/DelegateQuickStartGuide.md)** on GitHub or this more comprehensive
**[Arm Developer Guide](https://developer.arm.com/documentation/102561/latest/)** for information on how to accelerate
your TF Lite model using the Arm NN TF Lite Delegate.

The fastest way to integrate Arm NN into an **Android app** is to use our **Arm NN AAR (Android Archive) file with
Android Studio**. The AAR file conveniently packages the Arm NN TF Lite Delegate, Arm NN itself and ACL, ready to be
integrated into your Android ML application. Using the AAR lets you benefit from the **vast operator support** of
the Arm NN TF Lite Delegate. We held an **[Arm AI Tech Talk](https://www.youtube.com/watch?v=Zu4v0nqq2FA)** on how to
accelerate an ML image segmentation app in 5 minutes using this AAR file. To download the Arm NN AAR file, please see the
**[Pre-Built Binaries](#pre-built-binaries)** section below.

We also provide Debian packages for Arm NN, which are a quick way to start using Arm NN and the TF Lite Parser
(albeit with less ML operator support than the TF Lite Delegate). An
[installation guide](InstallationViaAptRepository.md) provides instructions on how to install the Arm NN Core and the
TF Lite Parser for Ubuntu 20.04.

To build Arm NN from scratch, we provide the **[Arm NN Build Tool](build-tool/README.md)**. This tool consists of
**parameterized bash scripts** accompanied by a **Dockerfile** for building Arm NN and its dependencies, including the
**[Arm Compute Library (ACL)](https://github.com/ARM-software/ComputeLibrary/)**. This tool supersedes the
majority of the existing Arm NN build guides as a user-friendly way to build Arm NN. The main benefit of building
Arm NN from scratch is the ability to **choose exactly which components to build, targeted for your ML project**.<br>


## Pre-Built Binaries

| Operating System                              | Architecture-specific Release Archive (Download)                                                                                                                                                                                                                                                                                  |
|-----------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Android (AAR)                                 | [![](https://img.shields.io/badge/download-android--aar-orange)](https://github.com/ARM-software/armnn/releases/download/v23.02/armnn_delegate_jni-23.02.aar)                                                                                                                                                                     |
| Android 10 "Q/Quince Tart" (API level 29)     | [![](https://img.shields.io/badge/download-arm64--v8.2a-blue)](https://github.com/ARM-software/armnn/releases/download/v23.02/ArmNN-android-29-arm64-v8.2-a.tar.gz) [![](https://img.shields.io/badge/download-arm64--v8a-red)](https://github.com/ARM-software/armnn/releases/download/v23.02/ArmNN-android-29-arm64-v8a.tar.gz) |
| Android 11 "R/Red Velvet Cake" (API level 30) | [![](https://img.shields.io/badge/download-arm64--v8.2a-blue)](https://github.com/ARM-software/armnn/releases/download/v23.02/ArmNN-android-30-arm64-v8.2-a.tar.gz) [![](https://img.shields.io/badge/download-arm64--v8a-red)](https://github.com/ARM-software/armnn/releases/download/v23.02/ArmNN-android-30-arm64-v8a.tar.gz) |
| Android 12 "S/Snow Cone" (API level 31)       | [![](https://img.shields.io/badge/download-arm64--v8.2a-blue)](https://github.com/ARM-software/armnn/releases/download/v23.02/ArmNN-android-31-arm64-v8.2-a.tar.gz) [![](https://img.shields.io/badge/download-arm64--v8a-red)](https://github.com/ARM-software/armnn/releases/download/v23.02/ArmNN-android-31-arm64-v8a.tar.gz) |
| Android 12L "Sv2/Snow Cone v2" (API level 32) | [![](https://img.shields.io/badge/download-arm64--v8.2a-blue)](https://github.com/ARM-software/armnn/releases/download/v23.02/ArmNN-android-32-arm64-v8.2-a.tar.gz) [![](https://img.shields.io/badge/download-arm64--v8a-red)](https://github.com/ARM-software/armnn/releases/download/v23.02/ArmNN-android-32-arm64-v8a.tar.gz) |
| Linux                                         | [![](https://img.shields.io/badge/download-aarch64-green)](https://github.com/ARM-software/armnn/releases/download/v23.02/ArmNN-linux-aarch64.tar.gz) [![](https://img.shields.io/badge/download-x86__64-yellow)](https://github.com/ARM-software/armnn/releases/download/v23.02/ArmNN-linux-x86_64.tar.gz)                       |


## Software Overview
The Arm NN SDK supports ML models in **TensorFlow Lite** (TF Lite) and **ONNX** formats.

**Arm NN's TF Lite Delegate** accelerates TF Lite models through **Python or C++ APIs**. Supported TF Lite operators
are accelerated by Arm NN, and any unsupported operators fall back to the reference TF Lite runtime, ensuring
extensive ML operator support. **The recommended way to use Arm NN is to
[convert your model to TF Lite format](https://www.tensorflow.org/lite/convert) and use the TF Lite Delegate.** Please
refer to the [Quick Start Guides](#quick-start-guides) for more information on how to use the TF Lite Delegate.

Arm NN also provides **TF Lite and ONNX parsers**, which are C++ libraries for integrating TF Lite or ONNX models
into your ML application. Please note that these parsers do not provide as extensive ML operator coverage as
the Arm NN TF Lite Delegate.

**Android** ML application developers have a number of options for using Arm NN:
* Use our Arm NN AAR (Android Archive) file with **Android Studio** as described in the
[Quick Start Guides](#quick-start-guides) section
* Download and use our [Pre-Built Binaries](#pre-built-binaries) for the Android platform
* Build Arm NN from scratch with the Android NDK using this [GitHub guide](BuildGuideAndroidNDK.md)

Arm also provides an [Android-NN-Driver](https://github.com/ARM-software/android-nn-driver) which implements a
hardware abstraction layer (HAL) for the Android NNAPI. When the Android NN Driver is integrated on an Android device,
ML models used in Android applications will automatically be accelerated by Arm NN.

For more information about the Arm NN components, please refer to our
[documentation](https://github.com/ARM-software/armnn/wiki/Documentation).

Arm NN is a key component of the [machine learning platform](https://mlplatform.org/), which is part of the
[Linaro Machine Intelligence Initiative](https://www.linaro.org/news/linaro-announces-launch-of-machine-intelligence-initiative/).

For FAQs and troubleshooting advice, see the [FAQ](docs/FAQ.md) or take a look at previous
[GitHub Issues](https://github.com/ARM-software/armnn/issues).


## Get Involved
The best way to get involved is by using our software. If you need help or encounter an issue, please raise it as a
[GitHub Issue](https://github.com/ARM-software/armnn/issues). Feel free to have a look at any of our open issues too.
We also welcome feedback on our documentation.

Feature requests without a volunteer to implement them are closed, but are given the 'Help wanted' label; these can be
found [here](https://github.com/ARM-software/armnn/issues?q=is%3Aissue+label%3A%22Help+wanted%22+).
Once you find a suitable issue, feel free to reopen it and add a comment so that Arm NN engineers know you are
working on it and can help.

When the feature is implemented, the 'Help wanted' label will be removed.


## Contributions
The Arm NN project welcomes contributions. For more details on contributing to Arm NN, please see the
[Contributing page](https://mlplatform.org/contributing/) on the [MLPlatform.org](https://mlplatform.org/) website,
or see the [Contributor Guide](CONTRIBUTING.md).

In particular, if you would like to implement your own backend alongside our CPU, GPU and NPU backends, there are
guides for backend development: the [Backend development guide](src/backends/README.md) and the
[Dynamic backend development guide](src/dynamic/README.md).


## Disclaimer
The armnn/tests directory contains tests used during Arm NN development. Many of them depend on third-party IP, model
protobufs and image files not distributed with Arm NN. The dependencies for some tests are available freely on
the Internet, for those who wish to experiment, but they won't run out of the box.


## License
Arm NN is provided under the [MIT](https://spdx.org/licenses/MIT.html) license.
See [LICENSE](LICENSE) for more information. Contributions to this project are accepted under the same license.

Individual files contain the following tag instead of the full license text.

    SPDX-License-Identifier: MIT

This enables machine processing of license information based on the SPDX License Identifiers that are available
here: http://spdx.org/licenses/


## Inclusive language commitment
Arm NN conforms to Arm's inclusive language policy and, to the best of our knowledge, does not contain any
non-inclusive language.

If you find something that concerns you, please email terms@arm.com.


## Third-party
Third-party tools used by Arm NN:

| Tool           | License (SPDX ID) | Description                                                                                                    | Version                                      | Provenance                           |
|----------------|-------------------|----------------------------------------------------------------------------------------------------------------|----------------------------------------------|--------------------------------------|
| cxxopts        | MIT               | A lightweight C++ option parser library                                                                        | SHA 12e496da3d486b87fa9df43edea65232ed852510 | https://github.com/jarro2783/cxxopts |
| doctest        | MIT               | Header-only C++ testing framework                                                                              | 2.4.6                                        | https://github.com/onqtam/doctest    |
| fmt            | MIT               | {fmt} is an open-source formatting library providing a fast and safe alternative to C stdio and C++ iostreams. | 7.0.1                                        | https://github.com/fmtlib/fmt        |
| ghc            | MIT               | A header-only single-file std::filesystem compatible helper library                                            | 1.3.2                                        | https://github.com/gulrak/filesystem |
| half           | MIT               | IEEE 754 conformant 16-bit half-precision floating point library                                               | 1.12.0                                       | http://half.sourceforge.net          |
| mapbox/variant | BSD               | A header-only alternative to 'boost::variant'                                                                  | 1.1.3                                        | https://github.com/mapbox/variant    |
| stb            | MIT               | Image loader, resize and writer                                                                                | 2.16                                         | https://github.com/nothings/stb      |


## Build Flags
Arm NN uses the following security-related build flags in its code:

| Build flags         |
|---------------------|
| -Wall               |
| -Wextra             |
| -Wold-style-cast    |
| -Wno-missing-braces |
| -Wconversion        |
| -Wsign-conversion   |
| -Werror             |