# Building and Running ExecuTorch with XNNPACK Backend
…XNNPACK Delegate for accelerating your ML models using CPU hardware. It will go over exporting and…
In this tutorial, you will learn how to export an XNNPACK-lowered model and run it on a target plat…
* [ExecuTorch XNNPACK Delegate](./native-delegates-executorch-xnnpack-delegate.md)
## Lowering a Model to XNNPACK
from executorch.backends.xnnpack.partition.xnnpack_partitioner import XnnpackPartitioner
…XNNPACK backend delegate to consume. Afterwards, the identified subgraphs will be serialized with …
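To make the partitioning step concrete, here is a toy sketch in pure Python (not the real `XnnpackPartitioner` API; the operator names and support list are hypothetical): a partitioner walks the graph's operators in order and groups maximal runs of backend-supported ops into subgraphs, which are what the delegate would then consume and serialize.

```python
# Toy illustration of delegate partitioning (NOT the real XnnpackPartitioner):
# group maximal consecutive runs of supported operators into subgraphs that a
# backend could consume; unsupported ops stay with the default runtime.
SUPPORTED = {"conv2d", "relu", "add", "linear"}  # hypothetical support list

def partition(ops):
    subgraphs, current = [], []
    for op in ops:
        if op in SUPPORTED:
            current.append(op)       # extend the current delegated subgraph
        elif current:
            subgraphs.append(current)  # unsupported op ends the run
            current = []
    if current:
        subgraphs.append(current)
    return subgraphs

print(partition(["conv2d", "relu", "softmax", "linear", "add"]))
# -> [['conv2d', 'relu'], ['linear', 'add']]
```

The real partitioner operates on an exported FX graph and matches patterns rather than a flat op list, but the grouping idea is the same.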
…XNNPACK Delegate. The subgraphs being delegated to XNNPACK are the first argument at eac…
After lowering to the XNNPACK program, we can then prepare it for ExecuTorch and save the model as…
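As a minimal sketch of the save step (the stand-in bytes below are hypothetical; in ExecuTorch the serialized program bytes come from the lowered program itself), writing the `.pte` file is a plain binary write:

```python
import os
import tempfile

# Hypothetical stand-in for the serialized program bytes; in a real flow these
# would come from the lowered ExecuTorch program's serialized buffer.
serialized = b"\x00EXECUTORCH-DEMO"

out_path = os.path.join(tempfile.mkdtemp(), "mv2_xnnpack_fp32.pte")
with open(out_path, "wb") as f:
    f.write(serialized)  # the .pte file is just the serialized program
```

The resulting `.pte` file is what the runtime (e.g. the example runner below) loads and executes.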
## Lowering a Quantized Model to XNNPACK
The XNNPACK delegate can also execute symmetrically quantized models. To understand the quantizatio…
…e XNNPACK delegate to lower the quantized exported model graph. From here, the procedure is the sa…
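To make "symmetrically quantized" concrete, here is a small numeric sketch in pure Python, independent of the actual quantizer implementation: symmetric int8 quantization fixes the zero-point at 0 and derives the scale from the maximum absolute value of the tensor.

```python
# Minimal sketch of symmetric int8 quantization (zero-point fixed at 0).
# This is illustrative only, not the XNNPACK quantizer's actual code path.
def symmetric_quantize(values, num_bits=8):
    qmax = 2 ** (num_bits - 1) - 1            # 127 for int8
    scale = max(abs(v) for v in values) / qmax or 1.0  # avoid scale == 0
    quantized = [max(-qmax - 1, min(qmax, round(v / scale))) for v in values]
    return quantized, scale

def dequantize(quantized, scale):
    # Symmetric scheme: real value is simply q * scale (zero-point is 0).
    return [q * scale for q in quantized]

q, s = symmetric_quantize([-1.0, 0.0, 0.5, 1.0])
print(q)  # -> [-127, 0, 64, 127]
```

Because the zero-point is 0, dequantization is a single multiply, which is part of what makes symmetric schemes cheap to execute on CPU.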
python -m examples.xnnpack.aot_compiler --model_name="mv2" --quantize --delegate
* the `--delegate` flag controls whether we attempt to lower parts of the graph to the XNNPACK dele…
## Running the XNNPACK Model with CMake
…XNNPACK-delegated model, we can now try running it with example inputs using CMake. We can build a…
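A typical configure-and-build sequence looks roughly like the following build-configuration sketch. The exact options depend on your ExecuTorch version and checkout; `EXECUTORCH_BUILD_XNNPACK` is the CMake option that enables the XNNPACK backend, while the other flags and the `-j` value are assumptions to adapt as needed.

```shell
# Sketch only: configure and build the XNNPACK example runner from the
# ExecuTorch repository root. Flag names may differ across versions.
mkdir -p cmake-out
cmake -DEXECUTORCH_BUILD_XNNPACK=ON \
      -DCMAKE_BUILD_TYPE=Release \
      -Bcmake-out .
cmake --build cmake-out -j"$(nproc)" --target xnn_executor_runner
```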
Now you should be able to find the executable built at `./cmake-out/backends/xnnpack/xnn_executor_r…
173 ./cmake-out/backends/xnnpack/xnn_executor_runner --model_path=./mv2_xnnpack_fp32.pte
175 ./cmake-out/backends/xnnpack/xnn_executor_runner --model_path=./mv2_xnnpack_q8.pte
## Building and Linking with the XNNPACK Backend
You can build the XNNPACK backend [CMake target](https://github.com/pytorch/executorch/blob/main/ba…
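When consuming the backend from your own CMake project, the link step looks roughly like the fragment below. This is a sketch under assumptions: the target names `executorch` and `xnnpack_backend` reflect how the ExecuTorch CMake files commonly expose these libraries, but verify them against your checkout, and `my_runner`/`main.cpp` are placeholders for your own application.

```cmake
# Sketch: link the XNNPACK backend into an application that embeds the
# ExecuTorch runtime. Target names are assumptions; check the ExecuTorch
# CMake files in your checkout before relying on them.
add_executable(my_runner main.cpp)
target_link_libraries(my_runner PRIVATE
    executorch
    xnnpack_backend)
```

Linking the backend target registers the XNNPACK delegate with the runtime, so a `.pte` file containing XNNPACK-lowered subgraphs can resolve its delegate at load time.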