# Object Detection Example

## Introduction
This is sample code showing object detection using Arm NN in two different modes:
1. Utilizing the public Arm NN C++ API.
2. Utilizing the TensorFlow Lite delegate file mechanism together with the Arm NN delegate file.

The compiled application can take

 * a video file

as input and
 * save a video file
 * or output video stream to the window

with detections shown in bounding boxes, class labels and confidence.

## Dependencies

This example utilizes OpenCV functions to capture and output video data.
1. The public Arm NN C++ API is provided by the Arm NN library.
2. For the Delegate file mode, the following dependencies exist:
2.1 Tensorflow version 2.10
2.2 Flatbuffers version 2.0.6
2.3 Arm NN delegate library

## System

This example was created on Ubuntu 20.04 with gcc and g++ version 9.
If you encounter any compiler errors while building with a different compiler version, you can install version 9 with:
```commandline
sudo apt install gcc-9 g++-9
```
and add the following compiler flags to every cmake command:
-DCMAKE_C_COMPILER=gcc-9 -DCMAKE_CXX_COMPILER=g++-9

### Arm NN

The object detection example build system does not trigger Arm NN compilation. Thus, before building the application,
please ensure that Arm NN libraries and header files are available on your build platform.
The application executable binary dynamically links with the following Arm NN libraries:
* libarmnn.so

For Arm NN public C++ API mode:
* libarmnnTfLiteParser.so

For Delegate file mode:
* libarmnnDelegate.so

Pre-compiled Arm NN libraries can be downloaded from https://github.com/ARM-software/armnn/releases/download/v21.11/ArmNN-linux-aarch64.tar.gz.
The "lib" and "include" directories should be taken together.

The build script searches for available Arm NN libraries in the following order:
1. Inside a custom user directory specified by the ARMNN_LIB_DIR cmake option.
2. Inside the current Arm NN repository, assuming that Arm NN was built following [these instructions](../../BuildGuideCrossCompilation.md).
3. Inside default locations for system libraries, assuming Arm NN was installed from deb packages.

Arm NN header files are searched for in an `include` directory next to the found libraries, i.e. for
libraries found in `/usr/lib` or `/usr/lib64`, header files are expected in `/usr/include` (or `${ARMNN_LIB_DIR}/include`).

Please see [find_armnn.cmake](./cmake/find_armnn.cmake) for implementation details.

### OpenCV

This application uses [OpenCV (Open Source Computer Vision Library)](https://opencv.org/) for video stream processing.
Your host platform may have OpenCV available through the Linux package manager. If this is the case, please install it the standard way:
```commandline
sudo apt install python3-opencv
```
If not, our build system has a script to download and cross-compile the required OpenCV modules
as well as the [FFMPEG](https://ffmpeg.org/) and [x264 encoder](https://www.videolan.org/developers/x264.html) libraries.
The latter builds limited OpenCV functionality, and the application will then support only video file input and video file
output. Displaying video frames in a window requires building OpenCV with GTK and OpenGL support.

The application executable binary dynamically links with the following OpenCV libraries:
* libopencv_core.so.4.0.0
* libopencv_imgproc.so.4.0.0
* libopencv_imgcodecs.so.4.0.0
* libopencv_videoio.so.4.0.0
* libopencv_video.so.4.0.0
* libopencv_highgui.so.4.0.0

and transitively depends on:
* libavcodec.so (FFMPEG)
* libavformat.so (FFMPEG)
* libavutil.so (FFMPEG)
* libswscale.so (FFMPEG)
* libx264.so (x264)

The application searches for the above libraries in the following order:
1. Inside a custom user directory specified by the OPENCV_LIB_DIR cmake option.
2. Inside default locations for system libraries.

If no OpenCV libraries were found, the cross-compilation build is extended with x264, FFMPEG and OpenCV compilation steps.

Note: The native build does not add third-party libraries to compilation.

Please see [find_opencv.cmake](./cmake/find_opencv.cmake) for implementation details.
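The video handling the application relies on boils down to OpenCV's standard capture/write pattern. The following is a minimal, self-contained sketch of that pattern in plain OpenCV; the file names and the processing step are placeholders, not code from this repository:

```c++
#include <opencv2/opencv.hpp>

int main()
{
    // Open the input video file (placeholder path).
    cv::VideoCapture source("input.avi");
    if (!source.isOpened()) { return 1; }

    // Mirror the input properties into the output writer.
    const int width  = static_cast<int>(source.get(cv::CAP_PROP_FRAME_WIDTH));
    const int height = static_cast<int>(source.get(cv::CAP_PROP_FRAME_HEIGHT));
    const double fps = source.get(cv::CAP_PROP_FPS);
    cv::VideoWriter sink("output.avi",
                         cv::VideoWriter::fourcc('M', 'J', 'P', 'G'),
                         fps, cv::Size(width, height));

    cv::Mat frame;
    while (source.read(frame))
    {
        // ... run detection and draw bounding boxes on the frame here ...
        sink.write(frame);
    }
    return 0;
}
```

Writing to a file only needs the videoio/imgcodecs modules built above; replacing `sink.write(frame)` with `cv::imshow()` is what requires the GTK/OpenGL-enabled OpenCV build.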
### Tensorflow Lite (needed only in delegate file mode)

This application uses [Tensorflow Lite](https://www.tensorflow.org/) version 2.10 for demonstrating the use of 'armnnDelegate'.
armnnDelegate is a library for accelerating certain TensorFlow Lite operators on Arm hardware by providing
the TensorFlow Lite interpreter with an alternative implementation of the operators via its delegation mechanism.
You may clone and build Tensorflow Lite and provide the path to its root and output library directories through the cmake
flags TENSORFLOW_ROOT and TFLITE_LIB_ROOT respectively.
For implementation details see the scripts FindTfLite.cmake and FindTfLiteSrc.cmake.

The application links with the Tensorflow Lite library libtensorflow-lite.a.

#### Download and build Tensorflow Lite version 2.10
Example for Tensorflow Lite native compilation:
```commandline
sudo apt install build-essential
git clone https://github.com/tensorflow/tensorflow.git
cd tensorflow/tensorflow
git checkout 359c3cdfc5fabac82b3c70b3b6de2b0a8c16874f #Tensorflow 2.10
mkdir build && cd build
cmake ../lite -DTFLITE_ENABLE_XNNPACK=OFF
make
```
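To sanity-check a freshly built `libtensorflow-lite.a`, you can compile and link a minimal program against it. The sketch below only loads a model and allocates tensors; `model.tflite` is a placeholder path:

```c++
#include <memory>
#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/model.h"

int main()
{
    // Load a .tflite model from disk (placeholder path).
    auto model = tflite::FlatBufferModel::BuildFromFile("model.tflite");
    if (model == nullptr) { return 1; }

    // Build an interpreter with the builtin operator resolver.
    tflite::ops::builtin::BuiltinOpResolver resolver;
    std::unique_ptr<tflite::Interpreter> interpreter;
    tflite::InterpreterBuilder(*model, resolver)(&interpreter);

    // Tensor allocation succeeds only if the model was parsed correctly.
    return interpreter->AllocateTensors() == kTfLiteOk ? 0 : 1;
}
```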
### Flatbuffers (needed only in delegate file mode)

This application uses [Flatbuffers](https://google.github.io/flatbuffers/) version 2.0.6 for serialization.
You may clone and build Flatbuffers and provide the path to its root directory through the cmake
flag FLATBUFFERS_ROOT.
Please see FindFlatbuffers.cmake for implementation details.

The application links with the Flatbuffers library libflatbuffers.a.

#### Download and build Flatbuffers version 2.0.6
Example for Flatbuffers native compilation:
```commandline
wget https://github.com/google/flatbuffers/archive/v2.0.6.tar.gz
tar xf v2.0.6.tar.gz
cd flatbuffers-2.0.6
mkdir install && cd install
cmake .. -DCMAKE_INSTALL_PREFIX:PATH=`pwd`
make install
```

## Building

There are two flows for building this application:
* native build on a host platform,
* cross-compilation for an Arm-based host platform.

### Build Options

* CMAKE_TOOLCHAIN_FILE - choose one of the available cross-compilation toolchain files:
    * `cmake/aarch64-toolchain.cmake`
    * `cmake/arm-linux-gnueabihf-toolchain.cmake`
* ARMNN_LIB_DIR - points to the custom location of the Arm NN libs and headers.
* OPENCV_LIB_DIR - points to the custom location of the OpenCV libs and headers.
* BUILD_UNIT_TESTS - set to `1` to build tests. In addition to the main application, the
  `object_detection_example-tests` unit tests executable will be created.

For the Delegate file mode:
* USE_ARMNN_DELEGATE - set to True to build the application with Tflite and delegate file mode. Default is False.
* TFLITE_LIB_ROOT - points to the custom location of the Tflite lib.
* TENSORFLOW_ROOT - points to the custom location of the Tensorflow root directory.
* FLATBUFFERS_ROOT - points to the custom location of the Flatbuffers root directory.

### Native Build
To build this application on a host platform, first ensure that the required dependencies are installed.
For example, for Raspberry Pi:
```commandline
sudo apt-get update
sudo apt-get -yq install pkg-config
sudo apt-get -yq install libgtk2.0-dev zlib1g-dev libjpeg-dev libpng-dev libxvidcore-dev libx264-dev
sudo apt-get -yq install libavcodec-dev libavformat-dev libswscale-dev ocl-icd-opencl-dev
```

To build the demo application, create a build directory:
```commandline
mkdir build
cd build
```
If you have already installed Arm NN and OpenCV, run the cmake and make commands inside the build directory:
```commandline
cmake ..
make
```
This will build the following in the bin directory:
* object_detection_example - application executable

If you have custom Arm NN and OpenCV locations, use the `OPENCV_LIB_DIR` and `ARMNN_LIB_DIR` options:
```commandline
cmake -DARMNN_LIB_DIR=/path/to/armnn -DOPENCV_LIB_DIR=/path/to/opencv ..
make
```

If you build with the Delegate file mode and have custom Arm NN, Tflite, and Flatbuffers locations,
use the USE_ARMNN_DELEGATE flag together with the `TFLITE_LIB_ROOT`, `TENSORFLOW_ROOT`, `FLATBUFFERS_ROOT` and
`ARMNN_LIB_DIR` options:
```commandline
cmake -DARMNN_LIB_DIR=/path/to/armnn/build/lib/ -DUSE_ARMNN_DELEGATE=True -DTFLITE_LIB_ROOT=/path/to/tensorflow/
 -DTENSORFLOW_ROOT=/path/to/tensorflow/ -DFLATBUFFERS_ROOT=/path/to/flatbuffers/ ..
make
```

### Cross-compilation

This section explains how to cross-compile the application and its dependencies on a Linux x86 machine
for Arm host platforms.

You will require a working cross-compilation toolchain supported by your host platform. For Raspberry Pi 3 and 4 with glibc
runtime version 2.28, the following toolchains were successfully used:
* https://releases.linaro.org/components/toolchain/binaries/latest-7/aarch64-linux-gnu/
* https://releases.linaro.org/components/toolchain/binaries/latest-7/arm-linux-gnueabihf/

Choose aarch64-linux-gnu if the `lscpu` command shows the architecture as aarch64, or arm-linux-gnueabihf if the detected
architecture is armv7l.

You can check the runtime version on your host platform by running:
```
ldd --version
```
On the **build machine**, install the C and C++ cross-compiler toolchains and add them to the PATH variable.

Install package dependencies:
```commandline
sudo apt-get update
sudo apt-get -yq install pkg-config
```
pkg-config is required by the OpenCV build to discover the FFMPEG libs.

To build the demo application, create a build directory:
```commandline
mkdir build
cd build
```
Inside the build directory, run the cmake and make commands:

**Arm 32bit**
```commandline
cmake -DARMNN_LIB_DIR=<path-to-armnn-libs> -DCMAKE_TOOLCHAIN_FILE=cmake/arm-linux-gnueabihf-toolchain.cmake ..
make
```
**Arm 64bit**
```commandline
cmake -DARMNN_LIB_DIR=<path-to-armnn-libs> -DCMAKE_TOOLCHAIN_FILE=cmake/aarch64-toolchain.cmake ..
make
```

Add the `-j` flag to the make command to run compilation in multiple threads.

From the build directory, copy the following to the host platform:
* bin directory - contains the object_detection_example executable,
* lib directory - contains the cross-compiled OpenCV, FFMPEG and x264 libraries,
* the Arm NN libs used during compilation.

The full list of libs to copy to your board after cross-compilation:
```
libarmnn.so
libarmnn.so.31
libarmnn.so.31.0

For Arm NN public C++ API mode:
libarmnnTfLiteParser.so
libarmnnTfLiteParser.so.24.4

For Delegate file mode:
libarmnnDelegate.so
libarmnnDelegate.so.25
libarmnnDelegate.so.25.0
libtensorflow-lite.a
libflatbuffers.a

libavcodec.so
libavcodec.so.58
libavcodec.so.58.54.100
libavdevice.so
libavdevice.so.58
libavdevice.so.58.8.100
libavfilter.so
libavfilter.so.7
libavfilter.so.7.57.100
libavformat.so
libavformat.so.58
libavformat.so.58.29.100
libavutil.so
libavutil.so.56
libavutil.so.56.31.100
libopencv_core.so
libopencv_core.so.4.0
libopencv_core.so.4.0.0
libopencv_highgui.so
libopencv_highgui.so.4.0
libopencv_highgui.so.4.0.0
libopencv_imgcodecs.so
libopencv_imgcodecs.so.4.0
libopencv_imgcodecs.so.4.0.0
libopencv_imgproc.so
libopencv_imgproc.so.4.0
libopencv_imgproc.so.4.0.0
libopencv_video.so
libopencv_video.so.4.0
libopencv_video.so.4.0.0
libopencv_videoio.so
libopencv_videoio.so.4.0
libopencv_videoio.so.4.0.0
libpostproc.so
libpostproc.so.55
libpostproc.so.55.5.100
libswresample.a
libswresample.so
libswresample.so.3
libswresample.so.3.5.100
libswscale.so
libswscale.so.5
libswscale.so.5.5.100
libx264.so
libx264.so.160
```
## Executing

Once the application executable is built, it can be executed with the following options:
* --video-file-path: Path to the video file to run object detection on **[REQUIRED]**
* --model-file-path: Path to the Object Detection model to use **[REQUIRED]**
* --label-path: Path to the label set for the provided model file **[REQUIRED]**
* --model-name: The name of the model being used. Accepted options: SSD_MOBILE | YOLO_V3_TINY **[REQUIRED]**
* --output-video-file-path: Path to the output video file with detections added in. Defaults to /tmp/output.avi
  **[OPTIONAL]**
* --preferred-backends: Takes the preferred backends in preference order, separated by comma.
  For example: CpuAcc,GpuAcc,CpuRef. Accepted options: [CpuAcc, CpuRef, GpuAcc].
  Defaults to CpuRef **[OPTIONAL]**
* --profiling_enabled: Enabling this option prints timing information, in microseconds, for important ML-related
  milestones. By default, this option is disabled. Accepted options are true/false **[OPTIONAL]**
### Object Detection on a supplied video file

To run object detection on a supplied video file and output the result to a video file:
```commandline
LD_LIBRARY_PATH=/path/to/armnn/libs:/path/to/opencv/libs ./object_detection_example --label-path /path/to/labels/file
 --video-file-path /path/to/video/file --model-file-path /path/to/model/file
 --model-name [YOLO_V3_TINY | SSD_MOBILE] --output-video-file-path /path/to/output/file
```

To run object detection on a supplied video file and output the result to a window gui:
```commandline
LD_LIBRARY_PATH=/path/to/armnn/libs:/path/to/opencv/libs ./object_detection_example --label-path /path/to/labels/file
 --video-file-path /path/to/video/file --model-file-path /path/to/model/file
 --model-name [YOLO_V3_TINY | SSD_MOBILE]
```

This application has been verified to work against the MobileNet SSD and the YOLO V3 tiny models, which can be downloaded along with their label sets from the Arm Model Zoo:
* https://github.com/ARM-software/ML-zoo/tree/master/models/object_detection/ssd_mobilenet_v1
* https://github.com/ARM-software/ML-zoo/tree/master/models/object_detection/yolo_v3_tiny

---

# Application Overview
This section provides a walkthrough of the application, explaining the following steps in detail:
1. Initialisation
    1. Reading from Video Source
    2. Preparing Labels and Model Specific Functions
2. Creating a Network (two modes are available)
    a. Arm NN C++ API mode:
        1. Creating Parser and Importing Graph
        2. Optimizing Graph for Compute Device
        3. Creating Input and Output Binding Information
    b. Tflite and delegate file mode:
        1. Building a Model and creating the Interpreter
        2. Creating the Arm NN delegate file
        3. Registering the Arm NN delegate file to the Interpreter
3. Object detection pipeline
    1. Pre-processing the Captured Frame
    2. Making Input and Output Tensors
    3. Executing Inference
    4. Postprocessing
    5. Decoding and Processing Inference Output
    6. Drawing Bounding Boxes


### Initialisation

##### Reading from Video Source
After parsing user arguments, the chosen video file or stream is loaded into an OpenCV `cv::VideoCapture` object.
We use the [`IFrameReader`](./include/IFrameReader.hpp) interface and the OpenCV specific implementation
[`CvVideoFrameReader`](./include/CvVideoFrameReader.hpp) in our main function to capture frames from the source using the
`ReadFrame()` function.

The `CvVideoFrameReader` object also tells us information about the input video. Using this information and the application
arguments, we create one of the implementations of the [`IFrameOutput`](./include/IFrameOutput.hpp) interface:
[`CvVideoFileWriter`](./include/CvVideoFileWriter.hpp) or [`CvWindowOutput`](./include/CvWindowOutput.hpp).
This object is used at the end of every loop to write the processed frame to an output video file or a gui
window.
`CvVideoFileWriter` uses `cv::VideoWriter` with the FFMPEG backend. `CvWindowOutput` makes use of the `cv::imshow()` function.

See the `GetFrameSourceAndSink` function in [Main.cpp](./src/Main.cpp) for more details.
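Conceptually, the source and sink interact as in the sketch below. This is illustrative only, with simplified stand-ins for the interfaces rather than the exact declarations from the headers above:

```c++
// Illustrative, simplified stand-ins for IFrameReader / IFrameOutput.
#include <memory>
#include <opencv2/opencv.hpp>

struct IFrameReader
{
    virtual std::shared_ptr<cv::Mat> ReadFrame() = 0;  // nullptr signals end of stream
    virtual ~IFrameReader() = default;
};

struct IFrameOutput
{
    virtual void WriteFrame(std::shared_ptr<cv::Mat>& frame) = 0;  // file or window
    virtual ~IFrameOutput() = default;
};

void RunLoop(IFrameReader& source, IFrameOutput& sink)
{
    // Read until the source is exhausted.
    while (std::shared_ptr<cv::Mat> frame = source.ReadFrame())
    {
        // ... the detection pipeline runs here ...
        sink.WriteFrame(frame);
    }
}
```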
##### Preparing Labels and Model Specific Functions
In order to interpret the result of running inference on the loaded network, it is required to load the labels
associated with the model. In the provided example code, the `AssignColourToLabel` function creates a vector of
label-colour pairs that is ordered according to the object class index at the output node of the model. Each label is
assigned a randomly generated RGB colour. This ensures that each class has a unique colour, which proves helpful when
plotting the bounding boxes of the various detected objects in a frame.

Depending on the model being used, the `CreatePipeline` function returns a specific implementation of the object detection
pipeline.


### Creating the Network
There are two ways of creating the network: the first uses the Arm NN C++ API, and the second uses Tflite with the
Arm NN delegate file.

#### Creating a Network using the Arm NN C++ API

All operations with Arm NN and networks are encapsulated in the
[`ArmnnNetworkExecutor`](./common/include/ArmnnUtils/ArmnnNetworkExecutor.hpp) class.

##### Creating Parser and Importing Graph
The first step with the Arm NN SDK is to import a graph from file by using the appropriate parser.

The Arm NN SDK provides parsers for reading graphs from a variety of model formats. In our application we specifically
focus on `.tflite, .pb, .onnx` models.

Based on the extension of the provided model file, the corresponding parser is created and the network file loaded with
the `CreateNetworkFromBinaryFile()` method. The parser handles the creation of the underlying Arm NN graph.

The current example accepts tflite format model files, so we use `ITfLiteParser`:
```c++
#include "armnnTfLiteParser/ITfLiteParser.hpp"

armnnTfLiteParser::ITfLiteParserPtr parser = armnnTfLiteParser::ITfLiteParser::Create();
armnn::INetworkPtr network = parser->CreateNetworkFromBinaryFile(modelPath.c_str());
```

##### Optimizing Graph for Compute Device
Arm NN supports optimized execution on multiple CPU and GPU devices. Prior to executing a graph, we must select the
appropriate device context. We do this by creating a runtime context with default options via `IRuntime()`.

For example:
```c++
#include "armnn/ArmNN.hpp"

auto runtime = armnn::IRuntime::Create(armnn::IRuntime::CreationOptions());
```

We can optimize the imported graph by specifying a list of backends in order of preference and implementing
backend-specific optimizations. The backends are identified by a string unique to the backend,
for example `CpuAcc, GpuAcc, CpuRef`.

For example:
```c++
std::vector<armnn::BackendId> backends{"CpuAcc", "GpuAcc", "CpuRef"};
```

Internally and transparently, Arm NN splits the graph into subgraphs based on these backends. It calls an
optimize-subgraphs function on each of them and, if possible, substitutes the corresponding subgraph in the original
graph with its optimized version.

Using the `Optimize()` function we optimize the graph for inference and load the optimized network onto the compute
device with `LoadNetwork()`. This function creates the backend-specific workloads
for the layers and a backend-specific workload factory which is called to create the workloads.

For example:
```c++
armnn::IOptimizedNetworkPtr optNet = Optimize(*network,
                                              backends,
                                              runtime->GetDeviceSpec(),
                                              armnn::OptimizerOptions());
// LoadNetwork() writes the identifier of the loaded network into its first argument.
armnn::NetworkId networkId;
std::string errorMessage;
runtime->LoadNetwork(networkId, std::move(optNet), errorMessage);
std::cerr << errorMessage << std::endl;
```
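Both steps can fail, so in real code it is worth checking their results. A more defensive sketch of the same two calls, assuming the `network`, `backends` and `runtime` objects from the snippets above (exact failure behaviour varies between Arm NN versions):

```c++
armnn::IOptimizedNetworkPtr optNet = Optimize(*network,
                                              backends,
                                              runtime->GetDeviceSpec(),
                                              armnn::OptimizerOptions());
if (!optNet)
{
    // Depending on the Arm NN version, Optimize() reports failure by
    // returning an empty pointer or by throwing an exception.
    throw armnn::Exception("Failed to optimize network");
}

armnn::NetworkId networkId;
std::string errorMessage;
if (runtime->LoadNetwork(networkId, std::move(optNet), errorMessage) != armnn::Status::Success)
{
    throw armnn::Exception("Failed to load network: " + errorMessage);
}
```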
##### Creating Input and Output Binding Information
Parsers can also be used to extract the input information for the network. By calling `GetSubgraphInputTensorNames`
we extract all the input names, and with `GetNetworkInputBindingInfo` we bind the input points of the graph.
For example:
```c++
std::vector<std::string> inputNames = parser->GetSubgraphInputTensorNames(0);
auto inputBindingInfo = parser->GetNetworkInputBindingInfo(0, inputNames[0]);
```
The input binding information contains all the essential information about the input. It is a tuple consisting of
integer identifiers for bindable layers (inputs, outputs) and the tensor info (data type, quantization information,
number of dimensions, total number of elements).

Similarly, we can get the output binding information for an output layer by using the parser to retrieve output
tensor names and calling `GetNetworkOutputBindingInfo()`.
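For example, mirroring the input case:
```c++
std::vector<std::string> outputNames = parser->GetSubgraphOutputTensorNames(0);
auto outputBindingInfo = parser->GetNetworkOutputBindingInfo(0, outputNames[0]);
```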
#### Creating a Network using Tflite and the Arm NN delegate file

All operations with Tflite and networks are encapsulated in the
[`ArmnnNetworkExecutor`](./include/delegate/ArmnnNetworkExecutor.hpp) class.

##### Building a Model and creating the Interpreter
The first step with Tflite is to build a model from file by using the FlatBuffer model class.
With that model we create the Tflite Interpreter:
```c++
#include <tensorflow/lite/interpreter.h>

m_model = tflite::FlatBufferModel::BuildFromFile(modelPath.c_str());
tflite::ops::builtin::BuiltinOpResolver resolver;
tflite::InterpreterBuilder(*m_model, resolver)(&m_interpreter);
```
After the Interpreter is created, we allocate tensors using the `AllocateTensors` function of the Interpreter:
```c++
m_interpreter->AllocateTensors();
```

##### Creating the Arm NN Delegate file
The Arm NN Delegate file is created using the ArmnnDelegate constructor.
The constructor accepts a DelegateOptions object that is created from the
list of preferred backends that we want to use, and an optional optimizerOptions object.
In this example we enable the fast math optimization and reduce all float32 operators to float16.
These optimizations can sometimes improve performance but can also cause degradation,
depending on the model and the backends involved; you should therefore try them out and
decide whether to use them or not.


```c++
#include <armnn_delegate.hpp>
#include <DelegateOptions.hpp>
#include <DelegateUtils.hpp>

/* enable fast math optimization */
armnn::BackendOptions modelOptionGpu("GpuAcc", {{"FastMathEnabled", true}});
optimizerOptions.m_ModelOptions.push_back(modelOptionGpu);

armnn::BackendOptions modelOptionCpu("CpuAcc", {{"FastMathEnabled", true}});
optimizerOptions.m_ModelOptions.push_back(modelOptionCpu);
/* enable reduce float32 to float16 optimization */
optimizerOptions.m_ReduceFp32ToFp16 = true;

armnnDelegate::DelegateOptions delegateOptions(preferredBackends, optimizerOptions);
/* create delegate object */
std::unique_ptr<TfLiteDelegate, decltype(&armnnDelegate::TfLiteArmnnDelegateDelete)>
    theArmnnDelegate(armnnDelegate::TfLiteArmnnDelegateCreate(delegateOptions),
                     armnnDelegate::TfLiteArmnnDelegateDelete);
```
##### Registering the Arm NN delegate file to the Interpreter
Registering the Arm NN delegate file provides the TensorFlow Lite interpreter with an alternative implementation
of the operators that can be accelerated by the Arm hardware.
For example:
```c++
/* Register the delegate file */
m_interpreter->ModifyGraphWithDelegate(std::move(theArmnnDelegate));
```
### Object detection pipeline

The generic object detection pipeline has 3 steps: perform data pre-processing, run inference, and decode the inference
results in the post-processing step.

See [`ObjDetectionPipeline`](include/ObjectDetectionPipeline.hpp) and the implementations [`MobileNetSSDv1`](include/ObjectDetectionPipeline.hpp)
and [`YoloV3Tiny`](include/ObjectDetectionPipeline.hpp) for more details.

#### Pre-processing the Captured Frame
Each frame captured from the source is read as a `cv::Mat` in BGR format, but the channels are swapped to RGB in the
frame reader code.

```c++
cv::Mat processed;
...
objectDetectionPipeline->PreProcessing(frame, processed);
```

The pre-processing step consists of resizing the frame to the required resolution, padding, and doing a data type
conversion to match the model input layer.
For example, the SSD MobileNet V1 model used in our example takes as input a tensor with shape `[1, 300, 300, 3]` and
data type `uint8`.

The pre-processing step returns a `cv::Mat` object containing the data ready for inference.
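In plain OpenCV terms, this amounts to something like the following. This is a simplified stand-in for the actual pre-processing code, specialised to the SSD MobileNet V1 input shape:

```c++
// Simplified pre-processing sketch for a [1, 300, 300, 3] uint8 input.
cv::Mat PreprocessForSsd(const cv::Mat& rgbFrame)
{
    // Resize to the model's spatial resolution.
    cv::Mat resized;
    cv::resize(rgbFrame, resized, cv::Size(300, 300), 0, 0, cv::INTER_LINEAR);

    // The frame is already 8-bit, 3-channel, so no type conversion is needed;
    // a float32 model would instead call resized.convertTo(out, CV_32FC3, scale).
    return resized;
}
```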
#### Executing Inference
```c++
od::InferenceResults results;
...
objectDetectionPipeline->Inference(processed, results);
```
The inference step calls the `ArmnnNetworkExecutor::Run` method, which prepares the input tensors and executes inference.
We have two separate implementations of the `ArmnnNetworkExecutor` class and its functions, including `ArmnnNetworkExecutor::Run`.
The first implementation, [`ArmnnNetworkExecutor`](./common/include/ArmnnUtils/ArmnnNetworkExecutor.hpp), utilizes the
Arm NN C++ API,
while the second implementation, [`ArmnnNetworkExecutor`](./include/delegate/ArmnnNetworkExecutor.hpp), utilizes
Tensorflow Lite and its Delegate file mechanism.

##### Executing Inference utilizing the Arm NN C++ API
A compute device performs inference for the loaded network using the `EnqueueWorkload()` function of the runtime context.
For example:
```c++
//const void* inputData = ...;
//outputTensors were pre-allocated before

armnn::InputTensors inputTensors = {{inputBindingInfo.first, armnn::ConstTensor(inputBindingInfo.second, inputData)}};
runtime->EnqueueWorkload(networkId, inputTensors, outputTensors);
```
We allocate memory for the output data once and map it to output tensor objects. After successful inference, we read data
from the pre-allocated output data buffer.
See [`ArmnnNetworkExecutor::ArmnnNetworkExecutor`](./common/include/ArmnnUtils/ArmnnNetworkExecutor.hpp)
and [`ArmnnNetworkExecutor::Run`](./common/include/ArmnnUtils/ArmnnNetworkExecutor.hpp) for more details.

##### Executing Inference utilizing Tensorflow Lite and the Arm NN delegate file
Inside the `PrepareTensors(..)` function, the input frame is copied to the Tflite Interpreter input tensor,
then the Tflite Interpreter performs inference for the loaded network using the `Invoke()` function.
For example:
```c++
PrepareTensors(inputData, dataBytes);

if (m_interpreter->Invoke() == kTfLiteOk)
```
After successful inference, we read data from the Tflite Interpreter output tensor and copy
it to the outResults vector.
See [`ArmnnNetworkExecutor::Run`](./include/delegate/ArmnnNetworkExecutor.hpp) for more details.

#### Postprocessing

##### Decoding and Processing Inference Output
The output from inference must be decoded to obtain information about the detected objects in the frame. The example
contains implementations for two networks, but you may also implement your own network decoding solution here.

For SSD MobileNet V1 models, we decode the results to obtain the bounding box positions, classification index,
confidence and number of detections in the input frame.
See [`SSDResultDecoder`](./include/SSDResultDecoder.hpp) for more details.

For YOLO V3 Tiny models, we decode the output and perform non-maximum suppression to filter out any weak detections
below a confidence threshold and any redundant bounding boxes above an intersection-over-union threshold.
See [`YoloResultDecoder`](./include/YoloResultDecoder.hpp) for more details.

You are encouraged to experiment with the confidence and intersection-over-union (IoU) threshold values
to achieve the best visual results.

The detection results are always returned as a vector of [`DetectedObject`](./include/DetectedObject.hpp),
with the box positions list containing bounding box coordinates in the form `[x_min, y_min, x_max, y_max]`.

#### Drawing Bounding Boxes
The post-processing step accepts a callback function that is invoked when the decoding is finished. We use it
to draw detections on the initial frame.
With the obtained detections and the [`AddInferenceOutputToFrame`](./src/ImageUtils.cpp) function, we draw bounding
boxes around the detected objects and add the associated label and confidence score.
```c++
//results - inference output
objectDetectionPipeline->PostProcessing(results, [&frame, &labels](od::DetectedObjects detects) -> void {
        AddInferenceOutputToFrame(detects, *frame, labels);
    });
```
The processed frames are written to a file or displayed in a separate window.
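Putting the stages together, the per-frame flow of the application can be summarised by the sketch below. It is illustrative only: `reader` and `sink` follow the simplified interfaces sketched earlier, not the exact signatures in the headers.

```c++
// Condensed per-frame flow: capture, pre-process, infer, decode, draw, write.
while (std::shared_ptr<cv::Mat> frame = reader->ReadFrame())
{
    cv::Mat processed;
    objectDetectionPipeline->PreProcessing(*frame, processed);

    od::InferenceResults results;
    objectDetectionPipeline->Inference(processed, results);

    objectDetectionPipeline->PostProcessing(results,
        [&frame, &labels](od::DetectedObjects detects) -> void {
            AddInferenceOutputToFrame(detects, *frame, labels);
        });

    sink->WriteFrame(frame);
}
```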