/// Copyright (c) 2020 ARM Limited.
///
/// SPDX-License-Identifier: MIT
///
/// Permission is hereby granted, free of charge, to any person obtaining a copy
/// of this software and associated documentation files (the "Software"), to deal
/// in the Software without restriction, including without limitation the rights
/// to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
/// copies of the Software, and to permit persons to whom the Software is
/// furnished to do so, subject to the following conditions:
///
/// The above copyright notice and this permission notice shall be included in all
/// copies or substantial portions of the Software.
///
/// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
/// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
/// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
/// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
/// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
/// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
/// SOFTWARE.
///

namespace armnn
{
/**
@page parsers Parsers

@tableofcontents
@section S4_caffe_parser ArmNN Caffe Parser

`armnnCaffeParser` is a library for loading neural networks defined in Caffe protobuf files into the Arm NN runtime.

## Caffe layers supported by the Arm NN SDK

This reference guide provides a list of Caffe layers the Arm NN SDK currently supports.

## Tested networks

Although some other neural networks might work, Arm tests the Arm NN SDK with Caffe implementations of the following neural networks:

- AlexNet.
- Cifar10.
- Inception-BN.
- Resnet_50, Resnet_101 and Resnet_152.
- VGG_CNN_S, VGG_16 and VGG_19.
- Yolov1_tiny.
- Lenet.
- MobileNetv1.

## Supported layers

The Arm NN SDK supports the following machine learning layers for Caffe networks:

- BatchNorm, in inference mode.
- Convolution, excluding the Dilation Size, Weight Filler, Bias Filler, Engine, Force nd_im2col, and Axis parameters.
  Caffe does not have a dedicated depthwise convolution layer; the equivalent is expressed through the notion of groups. Arm NN handles groups as follows:
  - when group = 1, the layer is a normal 2D convolution
  - when group = #input_channels, the layer can be replaced by a depthwise convolution
  - when 1 < group < #input_channels, the input is split into the given number of groups, a separate convolution is applied to each group, and the results are merged
- Concat, along the channel dimension only.
- Dropout, in inference mode.
- Element wise, excluding the coefficient parameter.
- Inner Product, excluding the Weight Filler, Bias Filler, Engine, and Axis parameters.
- Input.
- Local Response Normalisation (LRN), excluding the Engine parameter.
- Pooling, excluding the Stochastic Pooling and Engine parameters.
- ReLU.
- Scale.
- Softmax, excluding the Axis and Engine parameters.
- Split.

More machine learning layers will be supported in future releases.

Please note that certain deprecated Caffe features are not supported by the armnnCaffeParser. If you think that Arm NN should be able to load your model according to the list of supported layers, but you are getting strange error messages, then try upgrading your model to the latest format using Caffe, either by saving it to a new file or by using the upgrade utilities in `caffe/tools`.
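
To put this reference in context, the following is a minimal sketch of loading a Caffe model with `armnnCaffeParser` and preparing it for inference with the Arm NN runtime. The file name (`lenet.caffemodel`), the tensor names (`data`, `prob`) and the input shape are placeholders for your own model, and the reference backend (`CpuRef`) is used purely for illustration.

```cpp
// Minimal sketch: load a Caffe model with armnnCaffeParser and optimise it
// for the Arm NN runtime. "lenet.caffemodel", "data" and "prob" are
// placeholder names for the model file and its input/output tensors.
#include <armnn/ArmNN.hpp>
#include <armnnCaffeParser/ICaffeParser.hpp>

int main()
{
    using namespace armnn;

    // Create the parser and load the network, stating the input shape
    // and the outputs we want to keep.
    armnnCaffeParser::ICaffeParserPtr parser = armnnCaffeParser::ICaffeParser::Create();
    INetworkPtr network = parser->CreateNetworkFromBinaryFile(
        "lenet.caffemodel",
        { { "data", TensorShape({ 1, 1, 28, 28 }) } },  // input name -> shape (placeholder)
        { "prob" });                                     // requested outputs (placeholder)

    // Optimise the network for a backend and load it into the runtime.
    IRuntimePtr runtime = IRuntime::Create(IRuntime::CreationOptions());
    IOptimizedNetworkPtr optNet = Optimize(*network, { Compute::CpuRef }, runtime->GetDeviceSpec());

    NetworkId networkId;
    runtime->LoadNetwork(networkId, std::move(optNet));

    // networkId can now be used with runtime->EnqueueWorkload(...) to run inference.
    return 0;
}
```
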
<br/><br/><br/><br/>

@section S5_onnx_parser ArmNN Onnx Parser

`armnnOnnxParser` is a library for loading neural networks defined in ONNX protobuf files into the Arm NN runtime.

## ONNX operators that the Arm NN SDK supports

This reference guide provides a list of ONNX operators the Arm NN SDK currently supports.

The Arm NN SDK ONNX parser currently only supports fp32 operators.

## Fully supported

- Add
  - See the ONNX [Add documentation](https://github.com/onnx/onnx/blob/master/docs/Operators.md#Add) for more information.

- AveragePool
  - See the ONNX [AveragePool documentation](https://github.com/onnx/onnx/blob/master/docs/Operators.md#AveragePool) for more information.

- Constant
  - See the ONNX [Constant documentation](https://github.com/onnx/onnx/blob/master/docs/Operators.md#Constant) for more information.

- Clip
  - See the ONNX [Clip documentation](https://github.com/onnx/onnx/blob/master/docs/Operators.md#Clip) for more information.

- Flatten
  - See the ONNX [Flatten documentation](https://github.com/onnx/onnx/blob/master/docs/Operators.md#Flatten) for more information.

- GlobalAveragePool
  - See the ONNX [GlobalAveragePool documentation](https://github.com/onnx/onnx/blob/master/docs/Operators.md#GlobalAveragePool) for more information.

- LeakyRelu
  - See the ONNX [LeakyRelu documentation](https://github.com/onnx/onnx/blob/master/docs/Operators.md#LeakyRelu) for more information.

- MaxPool
  - See the ONNX [MaxPool documentation](https://github.com/onnx/onnx/blob/master/docs/Operators.md#MaxPool) for more information.

- Relu
  - See the ONNX [Relu documentation](https://github.com/onnx/onnx/blob/master/docs/Operators.md#Relu) for more information.

- Reshape
  - See the ONNX [Reshape documentation](https://github.com/onnx/onnx/blob/master/docs/Operators.md#Reshape) for more information.

- Sigmoid
  - See the ONNX [Sigmoid documentation](https://github.com/onnx/onnx/blob/master/docs/Operators.md#Sigmoid) for more information.

- Tanh
  - See the ONNX [Tanh documentation](https://github.com/onnx/onnx/blob/master/docs/Operators.md#Tanh) for more information.

## Partially supported

- Conv
  - The parser only supports 2D convolutions with a dilation rate of [1, 1] and group = 1 or group equal to the number of input channels (depthwise convolution). See the ONNX [Conv documentation](https://github.com/onnx/onnx/blob/master/docs/Operators.md#Conv) for more information.
- BatchNormalization
  - The parser does not support training mode. See the ONNX [BatchNormalization documentation](https://github.com/onnx/onnx/blob/master/docs/Operators.md#BatchNormalization) for more information.
- MatMul
  - The parser only supports constant weights in a fully connected layer.

## Tested networks

Arm tested these operators with the following ONNX fp32 neural networks:
- Simple MNIST. See the ONNX [MNIST documentation](https://github.com/onnx/models/tree/master/mnist) for more information.
- Mobilenet_v2. See the ONNX [MobileNet documentation](https://github.com/onnx/models/tree/master/models/image_classification/mobilenet) for more information.

More machine learning operators will be supported in future releases.
<br/><br/><br/><br/>

@section S6_tf_lite_parser ArmNN TensorFlow Lite Parser

`armnnTfLiteParser` is a library for loading neural networks defined by TensorFlow Lite FlatBuffers files into the Arm NN runtime.
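
Loading a FlatBuffers model follows the same pattern as the other parsers; in addition, the parser exposes the binding information needed to feed and read tensors. The following is a minimal sketch, assuming the public `ITfLiteParser` API; `model.tflite` and `input_tensor` are placeholder names.

```cpp
// Minimal sketch: load a .tflite model with armnnTfLiteParser and query the
// binding information for one of its input tensors. "model.tflite" and
// "input_tensor" are placeholder names.
#include <armnn/ArmNN.hpp>
#include <armnnTfLiteParser/ITfLiteParser.hpp>

int main()
{
    using namespace armnn;

    armnnTfLiteParser::ITfLiteParserPtr parser = armnnTfLiteParser::ITfLiteParser::Create();
    INetworkPtr network = parser->CreateNetworkFromBinaryFile("model.tflite");

    // Binding info (binding id + TensorInfo) for an input tensor of subgraph 0.
    armnnTfLiteParser::BindingPointInfo inputBinding =
        parser->GetNetworkInputBindingInfo(0, "input_tensor");

    IRuntimePtr runtime = IRuntime::Create(IRuntime::CreationOptions());
    IOptimizedNetworkPtr optNet = Optimize(*network, { Compute::CpuRef }, runtime->GetDeviceSpec());

    NetworkId networkId;
    runtime->LoadNetwork(networkId, std::move(optNet));

    // inputBinding.second describes the shape and data type the network expects;
    // inputBinding.first is the binding id used when building the InputTensors
    // passed to runtime->EnqueueWorkload(...).
    return 0;
}
```
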
## TensorFlow Lite operators that the Arm NN SDK supports

This reference guide provides a list of TensorFlow Lite operators the Arm NN SDK currently supports.

## Fully supported

The Arm NN SDK TensorFlow Lite parser currently supports the following operators:

- ADD
- AVERAGE_POOL_2D, Supported Fused Activation: RELU, RELU6, TANH, NONE
- BATCH_TO_SPACE
- CONCATENATION, Supported Fused Activation: RELU, RELU6, TANH, NONE
- CONV_2D, Supported Fused Activation: RELU, RELU6, TANH, NONE
- DEPTHWISE_CONV_2D, Supported Fused Activation: RELU, RELU6, TANH, NONE
- DIV
- EXP
- FULLY_CONNECTED, Supported Fused Activation: RELU, RELU6, TANH, NONE
- LOGISTIC
- L2_NORMALIZATION
- LEAKY_RELU
- MAX_POOL_2D, Supported Fused Activation: RELU, RELU6, TANH, NONE
- MAXIMUM
- MEAN
- MINIMUM
- MUL
- NEG
- PACK
- PAD
- RELU
- RELU6
- RESHAPE
- RESIZE_BILINEAR
- SLICE
- SOFTMAX
- SPACE_TO_BATCH
- SPLIT
- SPLIT_V
- SQUEEZE
- STRIDED_SLICE
- SUB
- TANH
- TRANSPOSE
- TRANSPOSE_CONV
- UNPACK

## Custom Operator

- TFLite_Detection_PostProcess

## Tested networks

Arm tested these operators with the following TensorFlow Lite neural networks:
- [Quantized MobileNet](http://download.tensorflow.org/models/mobilenet_v1_2018_02_22/mobilenet_v1_1.0_224_quant.tgz)
- [Quantized SSD MobileNet](http://download.tensorflow.org/models/object_detection/ssd_mobilenet_v1_quantized_300x300_coco14_sync_2018_07_18.tar.gz)
- DeepSpeech v1 converted from [TensorFlow model](https://github.com/mozilla/DeepSpeech/releases/tag/v0.4.1)
- DeepSpeaker

More machine learning operators will be supported in future releases.
<br/><br/><br/><br/>

@section S7_tf_parser ArmNN TensorFlow Parser

`armnnTfParser` is a library for loading neural networks defined by TensorFlow protobuf files into the Arm NN runtime.

## TensorFlow operators that the Arm NN SDK supports

This reference guide provides a list of TensorFlow operators the Arm NN SDK currently supports.

The Arm NN SDK TensorFlow parser currently only supports fp32 operators.
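
As with the Caffe parser, the caller supplies the input shapes and the names of the requested outputs when loading a frozen TensorFlow graph. The following is a minimal sketch with placeholder file and tensor names; the optimisation and loading steps are identical to the Caffe example earlier on this page.

```cpp
// Minimal sketch: load a frozen TensorFlow graph (.pb) with armnnTfParser.
// "frozen_graph.pb", "input" and "output" are placeholder names.
#include <armnn/ArmNN.hpp>
#include <armnnTfParser/ITfParser.hpp>

armnn::INetworkPtr LoadTfModel()
{
    armnnTfParser::ITfParserPtr parser = armnnTfParser::ITfParser::Create();
    return parser->CreateNetworkFromBinaryFile(
        "frozen_graph.pb",
        { { "input", armnn::TensorShape({ 1, 224, 224, 3 }) } },  // input name -> shape (placeholder)
        { "output" });                                            // requested outputs (placeholder)
}
```
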
## Fully supported

- avg_pool
  - See the TensorFlow [avg_pool documentation](https://www.tensorflow.org/api_docs/python/tf/nn/avg_pool) for more information.
- bias_add
  - See the TensorFlow [bias_add documentation](https://www.tensorflow.org/api_docs/python/tf/nn/bias_add) for more information.
- conv2d
  - See the TensorFlow [conv2d documentation](https://www.tensorflow.org/api_docs/python/tf/nn/conv2d) for more information.
- expand_dims
  - See the TensorFlow [expand_dims documentation](https://www.tensorflow.org/api_docs/python/tf/expand_dims) for more information.
- gather
  - See the TensorFlow [gather documentation](https://www.tensorflow.org/api_docs/python/tf/gather) for more information.
- identity
  - See the TensorFlow [identity documentation](https://www.tensorflow.org/api_docs/python/tf/identity) for more information.
- local_response_normalization
  - See the TensorFlow [local_response_normalization documentation](https://www.tensorflow.org/api_docs/python/tf/nn/local_response_normalization) for more information.
- max_pool
  - See the TensorFlow [max_pool documentation](https://www.tensorflow.org/api_docs/python/tf/nn/max_pool) for more information.
- placeholder
  - See the TensorFlow [placeholder documentation](https://www.tensorflow.org/api_docs/python/tf/placeholder) for more information.
- reduce_mean
  - See the TensorFlow [reduce_mean documentation](https://www.tensorflow.org/api_docs/python/tf/reduce_mean) for more information.
- relu
  - See the TensorFlow [relu documentation](https://www.tensorflow.org/api_docs/python/tf/nn/relu) for more information.
- relu6
  - See the TensorFlow [relu6 documentation](https://www.tensorflow.org/api_docs/python/tf/nn/relu6) for more information.
- rsqrt
  - See the TensorFlow [rsqrt documentation](https://www.tensorflow.org/api_docs/python/tf/math/rsqrt) for more information.
- shape
  - See the TensorFlow [shape documentation](https://www.tensorflow.org/api_docs/python/tf/shape) for more information.
- sigmoid
  - See the TensorFlow [sigmoid documentation](https://www.tensorflow.org/api_docs/python/tf/sigmoid) for more information.
- softplus
  - See the TensorFlow [softplus documentation](https://www.tensorflow.org/api_docs/python/tf/nn/softplus) for more information.
- squeeze
  - See the TensorFlow [squeeze documentation](https://www.tensorflow.org/api_docs/python/tf/squeeze) for more information.
- tanh
  - See the TensorFlow [tanh documentation](https://www.tensorflow.org/api_docs/python/tf/tanh) for more information.

## Partially supported

- add
  - The parser does not support all forms of [broadcast composition](https://www.tensorflow.org/performance/xla/broadcasting), only broadcasting of scalars and 1D tensors. See the TensorFlow [add operator documentation](https://www.tensorflow.org/api_docs/python/tf/add) for more information.
- add_n
  - The parser does not support all forms of [broadcast composition](https://www.tensorflow.org/performance/xla/broadcasting), only broadcasting of scalars and 1D tensors. See the TensorFlow [add_n operator documentation](https://www.tensorflow.org/api_docs/python/tf/add_n) for more information.
- concat
  - Arm NN supports concatenation along the channel dimension for data formats NHWC and NCHW.
- constant
  - The parser does not support the optional `shape` argument. It always infers the shape of the output tensor from `value`. See the TensorFlow [constant documentation](https://www.tensorflow.org/api_docs/python/tf/constant) for further information.
- depthwise_conv2d_native
  - The parser only supports a dilation rate of (1,1,1,1). See the TensorFlow [depthwise_conv2d_native documentation](https://www.tensorflow.org/api_docs/python/tf/nn/depthwise_conv2d_native) for more information.
- equal
  - The parser does not support all forms of [broadcast composition](https://www.tensorflow.org/performance/xla/broadcasting), only broadcasting of 4D and 1D tensors. See the TensorFlow [equal operator documentation](https://www.tensorflow.org/api_docs/python/tf/math/equal) for more information.
- fused_batch_norm
  - The parser does not support training outputs. See the TensorFlow [fused_batch_norm documentation](https://www.tensorflow.org/api_docs/python/tf/nn/fused_batch_norm) for more information.
- greater
  - The parser does not support all forms of [broadcast composition](https://www.tensorflow.org/performance/xla/broadcasting), only broadcasting of 4D and 1D tensors. See the TensorFlow [greater operator documentation](https://www.tensorflow.org/api_docs/python/tf/math/greater) for more information.
- matmul
  - The parser only supports constant weights in a fully connected layer. See the TensorFlow [matmul documentation](https://www.tensorflow.org/api_docs/python/tf/matmul) for more information.
- maximum
  - Where maximum is used in one of the following ways:
    - max(mul(a, x), x)
    - max(mul(x, a), x)
    - max(x, mul(a, x))
    - max(x, mul(x, a))
  - Arm NN interprets these patterns as an ActivationLayer with a LeakyRelu activation function (see the note after this list). Any other usage of maximum results in the insertion of a simple maximum layer. The parser does not support all forms of [broadcast composition](https://www.tensorflow.org/performance/xla/broadcasting). See the TensorFlow [maximum documentation](https://www.tensorflow.org/api_docs/python/tf/maximum) for more information.
- minimum
  - The parser does not support all forms of [broadcast composition](https://www.tensorflow.org/performance/xla/broadcasting), only broadcasting of 4D and 1D tensors. See the TensorFlow [minimum operator documentation](https://www.tensorflow.org/api_docs/python/tf/math/minimum) for more information.
- multiply
  - The parser does not support all forms of [broadcast composition](https://www.tensorflow.org/performance/xla/broadcasting), only broadcasting of scalars and 1D tensors. See the TensorFlow [multiply documentation](https://www.tensorflow.org/api_docs/python/tf/multiply) for more information.
- pad
  - The parser only supports the tf.pad function with mode = 'CONSTANT' and constant_values = 0. See the TensorFlow [pad documentation](https://www.tensorflow.org/api_docs/python/tf/pad) for more information.
- realdiv
  - The parser does not support all forms of [broadcast composition](https://www.tensorflow.org/performance/xla/broadcasting), only broadcasting of scalars and 1D tensors. See the TensorFlow [realdiv documentation](https://www.tensorflow.org/api_docs/python/tf/realdiv) for more information.
- reshape
  - The parser does not support reshaping to or from 4D. See the TensorFlow [reshape documentation](https://www.tensorflow.org/api_docs/python/tf/reshape) for more information.
- resize_images
  - The parser only supports `ResizeMethod.BILINEAR` with `align_corners=False`. See the TensorFlow [resize_images documentation](https://www.tensorflow.org/api_docs/python/tf/image/resize_images) for more information.
- softmax
  - The parser only supports 2D inputs and does not support selecting the `softmax` dimension. See the TensorFlow [softmax documentation](https://www.tensorflow.org/api_docs/python/tf/nn/softmax) for more information.
- split
  - Arm NN supports split along the channel dimension for data formats NHWC and NCHW.
- subtract
  - The parser does not support all forms of [broadcast composition](https://www.tensorflow.org/performance/xla/broadcasting), only broadcasting of scalars and 1D tensors. See the TensorFlow [subtract documentation](https://www.tensorflow.org/api_docs/python/tf/math/subtract) for more information.
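
The mapping of the maximum patterns above onto a LeakyRelu activation follows from a simple elementwise identity. The snippet below is only an explanatory note, assuming a constant scalar `a` with 0 < a < 1; it is not parser code.

```cpp
#include <algorithm>

// For a constant 0 < a < 1, max(a * x, x) evaluates to x when x >= 0 and to
// a * x when x < 0, which is exactly LeakyReLU with negative slope a.
float LeakyReluViaMax(float a, float x)
{
    return std::max(a * x, x);
}
```
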
## Tested networks

Arm tested these operators with the following TensorFlow fp32 neural networks:
- Lenet
- mobilenet_v1_1.0_224. The Arm NN SDK only supports the non-quantized version of the network. See the [MobileNet_v1 documentation](https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet_v1.md) for more information on quantized networks.
- inception_v3. The Arm NN SDK only supports the official inception_v3 transformed model. See the TensorFlow documentation on [preparing models for mobile deployment](https://www.tensorflow.org/mobile/prepare_models) for more information on how to transform the inception_v3 network.

These tests used the following datasets:
- Cifar10
- Simple MNIST. For more information, check out the [tutorial](https://developer.arm.com/technologies/machine-learning-on-arm/developer-material/how-to-guides/deploying-a-tensorflow-mnist-model-on-arm-nn) on the Arm Developer portal.

More machine learning operators will be supported in future releases.

**/
}