PyTorch 2.0 Performance Dashboard
=================================

**Authors:** `Bin Bao <https://github.com/desertfire>`__ and `Huy Do <https://github.com/huydhn>`__

PyTorch 2.0's performance is tracked nightly on this `dashboard <https://hud.pytorch.org/benchmark/compilers>`__.
The performance collection runs on 12 GCP A100 nodes every night. Each node contains one 40GB NVIDIA A100 GPU and
a 6-core 2.2GHz Intel Xeon CPU. The corresponding CI workflow file can be found
`here <https://github.com/pytorch/pytorch/blob/main/.github/workflows/inductor-perf-test-nightly.yml>`__.

How to read the dashboard?
---------------------------

The landing page shows tables for all three benchmark suites we measure (``TorchBench``, ``Huggingface``, and ``TIMM``),
along with graphs for one benchmark suite under the default setting. For example, the default graphs currently show the AMP
training performance trend over the past 7 days for ``TorchBench``. The drop-down lists at the top of that page let you
view tables and graphs with different options. In addition to the pass rate, three key
performance metrics are reported there: ``Geometric mean speedup``, ``Mean compilation time``, and
``Peak memory footprint compression ratio``.
Both ``Geometric mean speedup`` and ``Peak memory footprint compression ratio`` are measured against
PyTorch eager performance, and larger is better. Each individual performance number in those tables can be clicked,
which brings you to a view with detailed numbers for all the tests in that specific benchmark suite.

What is measured on the dashboard?
-----------------------------------

All the dashboard tests are defined in this
`function <https://github.com/pytorch/pytorch/blob/3e18d3958be3dfcc36d3ef3c481f064f98ebeaf6/.ci/pytorch/test.sh#L305>`__.
The exact test configurations are subject to change, but at the moment we measure both inference and training
performance with AMP precision on the three benchmark suites. We also measure different settings of TorchInductor,
including ``default``, ``with_cudagraphs`` (``default`` + CUDA graphs), and ``dynamic`` (``default`` + dynamic shapes).

Can I check if my PR affects TorchInductor's performance on the dashboard before merging?
-----------------------------------------------------------------------------------------

Individual dashboard runs can be triggered manually by clicking the ``Run workflow`` button
`here <https://github.com/pytorch/pytorch/actions/workflows/inductor-perf-test-nightly.yml>`__
and submitting with your PR's branch selected. This kicks off a full dashboard run with your PR's changes.
Once it is done, you can check the results by selecting the corresponding branch name and commit ID
on the performance dashboard UI. Be aware that this is an expensive CI run; since resources are limited,
please use this functionality wisely.

How can I run any performance test locally?
--------------------------------------------

The exact command lines used during a complete dashboard run can be found in any recent CI run logs.
The `workflow page <https://github.com/pytorch/pytorch/actions/workflows/inductor-perf-test-nightly.yml>`__
is a good place to look for logs from some of the recent runs.
In those logs, you can search for lines like
``python benchmarks/dynamo/huggingface.py --performance --cold-start-latency --inference --amp --backend inductor --disable-cudagraphs --device cuda``
and run them locally if you have a GPU that works with PyTorch 2.0.
``python benchmarks/dynamo/huggingface.py -h`` prints a detailed explanation of the benchmarking script's options.
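As an aside on reading the numbers: the suite-level ``Geometric mean speedup`` aggregates per-model speedups multiplicatively rather than arithmetically, so a single outlier model cannot dominate the suite's headline number. Below is a minimal sketch of that aggregation, using hypothetical per-model speedups (the actual dashboard values come from the nightly runs, not from this snippet):

```python
import math

def geometric_mean(values):
    """Geometric mean: the n-th root of the product of n positive values.

    Computed in log space for numerical stability. With speedup ratios,
    this has the nice property that a 2x speedup on one model and a 0.5x
    slowdown on another average out to exactly 1.0.
    """
    return math.exp(sum(math.log(v) for v in values) / len(values))

# Hypothetical compiled-vs-eager speedups for four models in one suite.
speedups = [1.8, 1.2, 0.9, 2.0]
print(f"Geometric mean speedup: {geometric_mean(speedups):.4f}x")

# Sanity check of the averaging property described above.
assert abs(geometric_mean([2.0, 0.5]) - 1.0) < 1e-9
```

The same multiplicative averaging idea applies to the ``Peak memory footprint compression ratio``, which is likewise a ratio against eager mode where larger is better.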