.. _torchdynamo_fine_grain_tracing:

TorchDynamo APIs for fine-grained tracing
=========================================

.. note:: In this document, ``torch.compiler.compile`` and
   ``torch.compile`` are used interchangeably. Both versions
   will work in your code.

``torch.compile`` performs TorchDynamo tracing on the whole user model.
However, it is possible that a small part of the model code cannot be
handled by ``torch.compiler``. In this case, you might want to disable
the compiler on that particular portion, while running compilation on
the rest of the model. This section describes the existing APIs that
you can use to define parts of your code in which to skip compilation,
and the relevant use cases.

The APIs that you can use to disable compilation on portions of your
code are listed in the following table:

.. csv-table:: TorchDynamo APIs to control fine-grained tracing
   :header: "API", "Description", "When to use?"
   :widths: auto

   "``torch.compiler.disable``", "Disables Dynamo on the decorated function as well as recursively invoked functions.", "Excellent for unblocking a user if a small portion of the model cannot be handled with ``torch.compile``."
   "``torch._dynamo.disallow_in_graph``", "Disallows the marked op in the TorchDynamo graph. TorchDynamo causes a graph break and runs the op in eager (no compile) mode. This is suitable for operators, while ``torch.compiler.disable`` is suitable for decorating functions.", "This API is excellent for both debugging and unblocking if a custom op like ``torch.ops.fbgemm.*`` is causing issues with the ``torch.compile`` function."
   "``torch.compiler.allow_in_graph``", "The annotated callable goes as is in the TorchDynamo graph, that is, it is a black box for TorchDynamo. Note that AOT Autograd will trace through it, so ``allow_in_graph`` is only a Dynamo-level concept.", "This API is useful for portions of the model which have known TorchDynamo hard-to-support features, like hooks or ``autograd.Function``. However, each usage of ``allow_in_graph`` **must be carefully screened** (no graph breaks, no closures)."
   "``torch._dynamo.graph_break``", "Adds a graph break. The code before and after the graph break goes through TorchDynamo.", "**Rarely useful for deployment** - If you think you need this, most probably you need either ``disable`` or ``disallow_in_graph``."
   "``torch.compiler.is_compiling``", "Indicates whether a graph is executed/traced as part of ``torch.compile()`` or ``torch.export()``.", ""
   "``torch.compiler.is_dynamo_compiling``", "Indicates whether a graph is traced via TorchDynamo. It is stricter than the ``torch.compiler.is_compiling()`` flag, as it is only set to True when TorchDynamo is used.", ""

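For instance, ``torch.compiler.is_compiling`` from the table above can be used to
branch model code on whether it is currently being traced. Below is a minimal
sketch; the module and the print-based eager path are illustrative only:

.. code-block:: python

    import torch


    class MyModule(torch.nn.Module):
        def forward(self, x):
            if torch.compiler.is_compiling():
                # Being traced as part of torch.compile()/torch.export():
                # keep this path free of Python side effects.
                return torch.sin(x)
            # Plain eager execution.
            print("running eagerly")
            return torch.sin(x)


    mod = torch.compile(MyModule())
    mod(torch.randn(4))
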

``torch.compiler.disable``
~~~~~~~~~~~~~~~~~~~~~~~~~~

``torch.compiler.disable`` disables compilation on the decorated function frame and all the function frames recursively invoked from the decorated function frame.

TorchDynamo intercepts the execution of each Python function frame. So, suppose you have a code structure (shown in the image below) where the function ``fn`` calls the functions ``a_fn`` and ``b_fn``, and ``a_fn`` calls ``aa_fn`` and ``ab_fn``. When you use PyTorch eager mode rather than ``torch.compile``, these function frames run as is. With ``torch.compile``, TorchDynamo intercepts each of these function frames (indicated by the green color):

.. figure:: _static/img/fine_grained_apis/api_diagram.png
   :alt: Call stack diagram of the different APIs.

Let's imagine that the function ``a_fn`` is causing trouble with ``torch.compile``,
and it is a non-critical portion of the model. You can use ``compiler.disable``
on the function ``a_fn``. As shown above, TorchDynamo will stop looking at frames
originating from the ``a_fn`` call (white color indicates original Python behavior).

To skip compilation, you can decorate the offending function with
``@torch.compiler.disable``.

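For example, here is a minimal sketch that mirrors the call structure above; the
tensor operations inside each function are placeholders:

.. code-block:: python

    import torch


    def aa_fn(x):
        return x + 1


    def ab_fn(x):
        return x + 2


    @torch.compiler.disable
    def a_fn(x):
        # This frame and everything it calls (aa_fn, ab_fn) run in eager mode.
        return aa_fn(x) + ab_fn(x)


    def b_fn(x):
        return x * 2


    @torch.compile
    def fn(x):
        # fn and b_fn are still compiled; a_fn falls back to eager.
        return a_fn(x) + b_fn(x)


    fn(torch.randn(4))
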

You can also use the non-decorator syntax if you don't want to change the source
code. However, we recommend that you avoid this style if possible. Here, you have
to take care that all users of the original function are now using the patched
version.

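A minimal sketch of the non-decorator form, assuming the offending function lives
in ``some_module.a_fn`` (a hypothetical name used for illustration):

.. code-block:: python

    import torch
    import some_module  # hypothetical module containing the offending function

    # Patch the function in place; every caller must now go through
    # some_module.a_fn to pick up the disabled version.
    some_module.a_fn = torch.compiler.disable(some_module.a_fn)
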

``torch._dynamo.disallow_in_graph``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

``torch._dynamo.disallow_in_graph`` disallows an operator, but not a function,
from being present in the TorchDynamo-extracted graph. Note that this is suitable
for operators, and not for general functions as in the case of ``_dynamo.disable``.

Let's imagine you compile your model with PyTorch. TorchDynamo is able to
extract a graph, but then you see the downstream compiler failing. For example,
the meta kernel is missing, or some Autograd dispatch key is set incorrectly
for a particular operator. Then you can mark that operator as
``disallow_in_graph``, and TorchDynamo will cause a graph break and run that
operator by using the PyTorch eager mode.

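Below is a minimal sketch, using ``torch.add`` as a stand-in for the problematic
operator:

.. code-block:: python

    import torch
    import torch._dynamo


    @torch.compile
    def fn(x):
        return torch.add(x, 1)


    # Suppose the downstream compiler cannot handle torch.add.
    # Disallowing it makes TorchDynamo graph-break around the op and
    # run it in eager mode, while the rest of fn is still compiled.
    torch._dynamo.disallow_in_graph(torch.add)

    fn(torch.randn(4))
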

The catch is that you will have to find the corresponding Dynamo-level operator,
and not the ATen-level operator. See more in the Limitations section of this doc.

.. warning::
   ``torch._dynamo.disallow_in_graph`` is a global flag. If you are comparing
   different backend compilers, you might have to call ``allow_in_graph`` for
   the disallowed operator when switching to the other compiler.

``torch.compiler.allow_in_graph``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

``torch.compiler.allow_in_graph`` is useful when the relevant function frame
has some known hard-to-support TorchDynamo feature, such as hooks or
``autograd.Function``, and you are confident that downstream PyTorch components
such as AOTAutograd can safely trace through the decorated function. When a
function is decorated with ``allow_in_graph``, TorchDynamo treats it as a
black box and puts it as is in the generated graph.

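A minimal sketch follows; ``my_custom_fn`` is a hypothetical name, standing in for
a function that TorchDynamo struggles to trace but that AOT Autograd can handle:

.. code-block:: python

    import torch


    def my_custom_fn(x):
        # TorchDynamo places calls to this function in the graph as is;
        # AOT Autograd traces through the body later.
        return torch.sin(x) + torch.cos(x)


    # Register the function so TorchDynamo treats it as a black box.
    torch.compiler.allow_in_graph(my_custom_fn)


    @torch.compile
    def fn(x):
        return my_custom_fn(x) * 2


    fn(torch.randn(4))
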

.. warning::
   ``allow_in_graph`` skips TorchDynamo completely on the decorated function,
   omitting all TorchDynamo safety checks, including graph breaks, handling of
   closures, and others. Use ``allow_in_graph`` with caution. PyTorch downstream
   components, such as AOTAutograd, rely on TorchDynamo to handle complex Python
   features, but ``allow_in_graph`` bypasses TorchDynamo. Using ``allow_in_graph``
   could lead to soundness and hard-to-debug issues.

Limitations
~~~~~~~~~~~

All of the existing APIs are applied at the TorchDynamo level. Therefore, these
APIs have visibility into only what TorchDynamo sees. This can lead to confusing
scenarios.

For example, ``torch._dynamo.disallow_in_graph`` will not work for ATen operators
because they are visible only to AOT Autograd;
``torch._dynamo.disallow_in_graph(torch.ops.aten.add)`` will not work in the
above example.

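Continuing the earlier sketch (the same ``fn`` from the ``disallow_in_graph``
example above), the following illustrates the difference:

.. code-block:: python

    import torch
    import torch._dynamo

    # Has no effect at the Dynamo level: TorchDynamo never sees the ATen op.
    torch._dynamo.disallow_in_graph(torch.ops.aten.add)

    # Works: torch.add is the operator that TorchDynamo actually traces.
    torch._dynamo.disallow_in_graph(torch.add)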