# Debugging Models in ExecuTorch

With the ExecuTorch Developer Tools, users can debug their models for numerical inaccuracies and extract model outputs from their device to do quality analysis (such as signal-to-noise ratio, mean squared error, etc.).

Currently, ExecuTorch supports the following debugging flows:
- Extraction of model-level outputs via ETDump.
- Extraction of intermediate outputs (outside of delegates) via ETDump:
  - Linking of these intermediate outputs back to the eager model Python code.


## Steps to debug a model in ExecuTorch

### Runtime
For a real example reflecting the steps below, please refer to [example_runner.cpp](https://github.com/pytorch/executorch/blob/main/examples/devtools/example_runner/example_runner.cpp).

1. [Optional] Generate an [ETRecord](./etrecord.rst) while exporting your model. When provided, this enables users to link profiling information back to the eager model source code (with stack traces and module hierarchy); a Python sketch of this step is shown after this list.
2. Integrate [ETDump generation](./etdump.md) into the runtime and set the debugging level by configuring the `ETDumpGen` object. Then, provide an additional buffer to which intermediate outputs and program outputs will be written. Currently we support two levels of debugging:
    - Program-level outputs
    ```C++
    Span<uint8_t> buffer((uint8_t*)debug_buffer, debug_buffer_size);
    etdump_gen.set_debug_buffer(buffer);
    etdump_gen.set_event_tracer_debug_level(
        EventTracerDebugLogLevel::kProgramOutputs);
    ```

    - Intermediate outputs of executed (non-delegated) operations (these will include the program-level outputs too)
    ```C++
    Span<uint8_t> buffer((uint8_t*)debug_buffer, debug_buffer_size);
    etdump_gen.set_debug_buffer(buffer);
    etdump_gen.set_event_tracer_debug_level(
        EventTracerDebugLogLevel::kIntermediateOutputs);
    ```
3. Build the runtime with the pre-processor flag that enables tracking of debug events. Instructions are in the [ETDump documentation](./etdump.md).
4. Run your model and dump out the ETDump buffer as described [here](./etdump.md). (Do the same for the debug buffer if configured above; a C++ sketch covering this is shown after this list.)
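To make step 1 concrete, here is a minimal Python sketch of generating an ETRecord during export. The module, example inputs, and output path are placeholder assumptions; see the [ETRecord documentation](./etrecord.rst) for the full API.

```python
import copy

import torch
from executorch.devtools import generate_etrecord
from executorch.exir import to_edge
from torch.export import export

model = MyModel().eval()  # placeholder: your eager model
example_inputs = (torch.rand(1, 3, 224, 224),)  # placeholder inputs

# Export to the edge dialect, keeping a copy of the edge program since
# to_executorch() mutates the program manager in place.
aten_program = export(model, example_inputs)
edge_program = to_edge(aten_program)
edge_program_copy = copy.deepcopy(edge_program)

et_program = edge_program.to_executorch()

# Bundle both programs into an ETRecord so the Inspector can later link
# runtime events back to the eager source code.
generate_etrecord("etrecord.bin", edge_program_copy, et_program)
```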
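For steps 2-4 end to end, the following condensed C++ sketch is loosely modeled on `example_runner.cpp`. The `program` and `memory_manager` setup is elided, and the buffer size, method name, and output paths are placeholder assumptions; exact headers and namespaces may vary across ExecuTorch versions.

```C++
// Configure ETDumpGen with a debug buffer (step 2).
ETDumpGen etdump_gen;
size_t debug_buffer_size = 20 * 1024 * 1024;  // placeholder size
void* debug_buffer = malloc(debug_buffer_size);
Span<uint8_t> buffer((uint8_t*)debug_buffer, debug_buffer_size);
etdump_gen.set_debug_buffer(buffer);
etdump_gen.set_event_tracer_debug_level(
    EventTracerDebugLogLevel::kIntermediateOutputs);

// Pass the generator as the event tracer when loading the method,
// then run the method as usual (step 4).
Result<Method> method =
    program->load_method("forward", &memory_manager, &etdump_gen);
// ... method->execute() ...

// Write the ETDump buffer out to a file.
ETDumpResult result = etdump_gen.get_etdump_data();
if (result.buf != nullptr && result.size > 0) {
  FILE* f = fopen("etdump.etdp", "w+");
  fwrite(result.buf, 1, result.size, f);
  fclose(f);
  free(result.buf);
}

// Write the debug buffer out as well so the Inspector can consume it.
FILE* df = fopen("debug_output.bin", "w+");
fwrite(debug_buffer, 1, debug_buffer_size, df);
fclose(df);
```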
### Accessing the debug outputs post-run using the Inspector APIs
Once a model has been run, users can leverage the [Inspector APIs](./model-inspector.rst), together with the generated ETDump and debug buffers, to inspect these debug outputs.

```python
from executorch.devtools import Inspector

# Create an Inspector instance with the ETDump and the debug buffer.
inspector = Inspector(
    etdump_path=etdump_path,
    buffer_path=buffer_path,
    # etrecord is optional; if provided, it links the runtime events
    # back to the eager model Python source code.
    etrecord=etrecord_path,
)

# Accessing program outputs is as simple as this:
for event_block in inspector.event_blocks:
    if event_block.name == "Execute":
        print(event_block.run_output)

# Accessing intermediate outputs from each event (an event here is
# essentially an instruction that executed in the runtime).
for event_block in inspector.event_blocks:
    if event_block.name == "Execute":
        for event in event_block.events:
            print(event.debug_data)
            # If an ETRecord was provided during Inspector initialization,
            # users can also print the stack traces and module hierarchy
            # of these events.
            print(event.stack_traces)
            print(event.module_hierarchy)
```

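For a quick overview of everything that was recorded, the Inspector can also render the events in tabular form via `print_data_tabular`, which is part of the [Inspector APIs](./model-inspector.rst).

```python
# Display all recorded events (including any debug data) as a table.
inspector.print_data_tabular()
```
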
We've also provided a simple set of utilities that let users perform quality analysis of their model outputs with respect to a set of reference outputs (possibly from the eager-mode model).


```python
from executorch.devtools.inspector import compare_results

# Run a simple quality analysis between the model outputs sourced from the
# runtime and a set of reference outputs.
#
# Setting plot to True will result in the quality metrics being graphed
# and displayed (when run from a notebook) and written out to the
# filesystem. A dictionary containing the results is always returned.
for event_block in inspector.event_blocks:
    if event_block.name == "Execute":
        compare_results(event_block.run_output, ref_outputs, plot=True)
```

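`compare_results` operates on the full run outputs. For a per-operator look at the intermediate outputs, a small amount of manual comparison also works. Below is a minimal sketch, assuming you have separately collected eager-mode reference tensors keyed by event name; the `reference_outputs` dict, the single-tensor-output assumption, and the `mse` helper are hypothetical illustration, not part of the Inspector API.

```python
import torch

def mse(actual: torch.Tensor, expected: torch.Tensor) -> float:
    # Hypothetical helper: mean squared error between two tensors.
    return torch.mean((actual.float() - expected.float()) ** 2).item()

# reference_outputs: hypothetical dict mapping an event name to the
# eager-mode tensor produced by the corresponding operator.
for event_block in inspector.event_blocks:
    if event_block.name == "Execute":
        for event in event_block.events:
            expected = reference_outputs.get(event.name)
            if expected is None or not event.debug_data:
                continue
            # debug_data holds the outputs recorded for this instruction;
            # assume a single tensor output per event for simplicity.
            actual = event.debug_data[0]
            if isinstance(actual, torch.Tensor):
                print(f"{event.name}: MSE vs eager = {mse(actual, expected):.6g}")
```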