
3-Ultimate-Guide-to-PyTorch-Contributions), specifically the [Submitting a Change](https://github.c…
<!-- toc -->

- [Developing PyTorch](#developing-pytorch)
  - [Setup the development environment](#setup-the-development-environment)
  - [Tips and Debugging](#tips-and-debugging)
- [Nightly Checkout & Pull](#nightly-checkout--pull)
- [Codebase structure](#codebase-structure)
- [Unit testing](#unit-testing)
  - [Python Unit Testing](#python-unit-testing)
  - [Better local unit tests with `pytest`](#better-local-unit-tests-with-pytest)
  - [Local linting](#local-linting)
    - [Running `mypy`](#running-mypy)
  - [C++ Unit Testing](#c-unit-testing)
  - [Run Specific CI Jobs](#run-specific-ci-jobs)
- [Merging your Change](#merging-your-change)
- [Writing documentation](#writing-documentation)
  - [Docstring type formatting](#docstring-type-formatting)
  - [Building documentation](#building-documentation)
    - [Tips](#tips)
    - [Building C++ Documentation](#building-c-documentation)
  - [Previewing changes locally](#previewing-changes-locally)
  - [Previewing documentation on PRs](#previewing-documentation-on-prs)
  - [Adding documentation tests](#adding-documentation-tests)
- [Profiling with `py-spy`](#profiling-with-py-spy)
- [Managing multiple build trees](#managing-multiple-build-trees)
- [C++ development tips](#c-development-tips)
  - [Build only what you need](#build-only-what-you-need)
  - [Code completion and IDE support](#code-completion-and-ide-support)
  - [Make no-op build fast](#make-no-op-build-fast)
    - [Use Ninja](#use-ninja)
    - [Use CCache](#use-ccache)
    - [Use a faster linker](#use-a-faster-linker)
    - [Use pre-compiled headers](#use-pre-compiled-headers)
    - [Workaround for header dependency bug in nvcc](#workaround-for-header-dependency-bug-in-nvcc)
  - [Rebuild few files with debug information](#rebuild-few-files-with-debug-information)
  - [C++ frontend development tips](#c-frontend-development-tips)
  - [GDB integration](#gdb-integration)
  - [C++ stacktraces](#c-stacktraces)
- [CUDA development tips](#cuda-development-tips)
- [Windows development tips](#windows-development-tips)
  - [Known MSVC (and MSVC with NVCC) bugs](#known-msvc-and-msvc-with-nvcc-bugs)
  - [Building on legacy code and CUDA](#building-on-legacy-code-and-cuda)
- [Running clang-tidy](#running-clang-tidy)
- [Pre-commit tidy/linting hook](#pre-commit-tidylinting-hook)
- [Building PyTorch with ASAN](#building-pytorch-with-asan)
  - [Getting `ccache` to work](#getting-ccache-to-work)
  - [Why this stuff with `LD_PRELOAD` and `LIBASAN_RT`?](#why-this-stuff-with-ld_preload-and-libasan…
  - [Why LD_PRELOAD in the build function?](#why-ld_preload-in-the-build-function)
  - [Why no leak detection?](#why-no-leak-detection)
- [Caffe2 notes](#caffe2-notes)
- [CI failure tips](#ci-failure-tips)
  - [Which commit is used in CI?](#which-commit-is-used-in-ci)
- [Dev Infra Office Hours](#dev-infra-office-hours)

<!-- tocstop -->
…pytorch/pytorch#from-source). If you get stuck when developing PyTorch on your machine, check out …
…itHub with SSH](https://docs.github.com/en/authentication/connecting-to-github-with-ssh) to setup …

```bash
make setup-env # or make setup-env-cuda for pre-built CUDA binaries
conda activate pytorch-deps
```

* If you want to have no-op incremental rebuilds (which are fast), see [Make no-op build fast](#mak…
tree when importing `torch` package. (This is done by creating [`.egg-link`](https://wiki.python.…
non-Python files (`.cpp`, `.cc`, `.cu`, `.h`, ...).

```bash
pushd torch/lib; sh -c "ln -sf ../../build/lib/libtorch_cpu.* ."; popd
```
```bash
conda uninstall pytorch -y
```

…1. Run `printf '#include <stdio.h>\nint main() { printf("Hello World");}'|clang -x c -; ./a.out` t…
`rm -rf build` from the toplevel `pytorch` directory and start over.
…3. If you have made edits to the PyTorch repo, commit any change you'd like to keep and clean the …

```bash
git submodule deinit -f .
git clean -xdf
git submodule update --init --recursive # very important to sync the submodules
```

* If you run into issues running `git submodule update --init --recursive`, please try the following:
  - If you encounter an error such as
    …check whether your Git local or global config file contains any `submodule.*` settings. If yes, re…
    …(please reference [this doc](https://git-scm.com/docs/git-config#Documentation/git-config.txt-subm…
  - If you encounter an error such as
    …`git config --global --list` and search for config like `http.proxysslcert=<cert_file>`. Then chec…

    ```bash
    openssl x509 -noout -in <cert_file> -dates
    ```

  - If you encounter an error that some third_party modules are not checked out correctly, such as
    …remove any `submodule.*` settings in your local git config (`.git/config` of your pytorch repo) an…

…ces](https://github.com/pytorch/pytorch/wiki/Best-Practices-to-Edit-and-Compile-Pytorch-Source-Cod…
…om office hours! See details [here](https://github.com/pytorch/pytorch/wiki/Dev-Infra-Office-Hours)
version of PyTorch and installs pre-built binaries into the current repository.

```bash
./tools/nightly.py checkout -b my-nightly-branch
conda activate pytorch-deps
```

Or if you would like to re-use an existing conda environment, you can pass in
the regular environment parameters (`--name` or `--prefix`):

```bash
./tools/nightly.py checkout -b my-nightly-branch -n my-env
conda activate my-env
```

To install the nightly binaries built with CUDA, you can pass in the flag `--cuda`:

```bash
./tools/nightly.py checkout -b my-nightly-branch --cuda
conda activate pytorch-deps
```

```bash
./tools/nightly.py pull -n my-env
conda activate my-env
```
* [c10](c10) - Core library files that work everywhere, both server
* [aten](aten) - C++ tensor library for PyTorch (no autograd support)
  * [src](aten/src) - [README](aten/src/README.md)
    * [core](aten/src/ATen/core) - Core functionality of ATen. This
      is migrating to top-level c10 folder.
    * [native](aten/src/ATen/native) - Modern implementations of
      * [cpu](aten/src/ATen/native/cpu) - Not actually CPU
        which are compiled with processor-specific instructions, like
      * [cuda](aten/src/ATen/native/cuda) - CUDA implementations of
      * [sparse](aten/src/ATen/native/sparse) - CPU and CUDA
        implementations of operators which simply bind to some
      * [quantized](aten/src/ATen/native/quantized/) - Quantized tensor (i.e. QTensor) operation impleme…
* [torch](torch) - The actual PyTorch library. Everything that is not
  * [csrc](torch/csrc) - C++ files composing the PyTorch library. Files
    * [jit](torch/csrc/jit) - Compiler and frontend for TorchScript JIT
    * [autograd](torch/csrc/autograd) - Implementation of reverse-mode automatic differentiation. [REA…
    * [api](torch/csrc/api) - The PyTorch C++ frontend.
    * [distributed](torch/csrc/distributed) - Distributed training
* [tools](tools) - Code generation scripts for the PyTorch library.
* [test](test) - Python unit tests for PyTorch Python frontend.
  * [test_torch.py](test/test_torch.py) - Basic tests for PyTorch
  * [test_autograd.py](test/test_autograd.py) - Tests for non-NN
  * [test_nn.py](test/test_nn.py) - Tests for NN operators and
  * [test_jit.py](test/test_jit.py) - Tests for the JIT compiler
  * [cpp](test/cpp) - C++ unit tests for PyTorch C++ frontend.
    * [api](test/cpp/api) - [README](test/cpp/api/README.md)
    * [jit](test/cpp/jit) - [README](test/cpp/jit/README.md)
    * [tensorexpr](test/cpp/tensorexpr) - [README](test/cpp/tensorexpr/README.md)
  * [expect](test/expect) - Automatically generated "expect" files
  * [onnx](test/onnx) - Tests for ONNX export functionality,
* [caffe2](caffe2) - The Caffe2 library.
  * [core](caffe2/core) - Core files of Caffe2, e.g., tensor, workspace,
  * [operators](caffe2/operators) - Operators of Caffe2.
  * [python](caffe2/python) - Python bindings to Caffe2.
* [.circleci](.circleci) - CircleCI configuration management. [README](.circleci/README.md)
- `expecttest` and `hypothesis` - required to run tests
- `mypy` - recommended for linting
- `pytest` - recommended to run tests more selectively
…uch, there may be some inconsistencies between local testing and CI testing; if you observe an inc…

use the `-k` flag:

```bash
pytest test/test_nn.py -k Loss -v
```
```bash
make setup-lint
```

PyTorch](https://github.com/pytorch/pytorch/wiki/Guide-for-adding-type-annotations-to-PyTorch)

is `./build/bin/FILENAME --gtest_filter=TESTSUITE.TESTNAME`, where

```bash
./build/bin/test_jit --gtest_filter=ContainerAliasingTest.MayContainAlias
```
You can generate a commit that limits the CI to only run a specific job by using

```bash
# --job: specify one or more times to filter to a specific job + its dependencies
# --filter-gha: specify github actions workflows to keep
# --make-commit: commit CI changes to git with a message explaining the change
…/explicit_ci_jobs.py --job binary_linux_manywheel_3_6m_cpu_devtoolset7_nightly_test --filter-gha '…
```

[`ghstack`](https://github.com/ezyang/ghstack). It creates a large commit that is
…me see us during [our office hours](https://github.com/pytorch/pytorch/wiki/Dev-Infra-Office-Hours)
`@pytorchmergebot merge` ([what's this bot?](https://github.com/pytorch/pytorch/wiki/Bot-commands))
- **User facing documentation**:
- **Developer facing documentation**:
…s [page on the wiki](https://github.com/pytorch/pytorch/wiki/Where-or-how-should-I-add-documentati…

The rest of this section is about user-facing documentation.

PyTorch uses [Google style](https://www.sphinx-doc.org/en/master/usage/extensions/example_google.ht…

* The only acceptable delimiter words for types are `or` and `of`. No other non-type words should b…
* Basic Python types should match their type name so that the [Intersphinx](https://www.sphinx-doc.…
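As a sketch of these conventions, a Google-style docstring for a hypothetical helper (the function name and parameters are illustrative, not part of PyTorch's API) might look like:

```python
def pad(tensor, padding, mode="constant"):
    """Pad the input tensor.

    Args:
        tensor (Tensor or list of Tensor): the input to pad. Note the
            delimiters: only ``or`` and ``of`` join type names.
        padding (int or tuple of int): size of the padding.
        mode (str, optional): padding mode. Default: ``"constant"``.

    Returns:
        Tensor: the padded input. ``Tensor``, ``int``, and ``str`` match
        their real type names, so Intersphinx can link them.
    """
    return tensor
```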
```bash
pip install -r requirements.txt
# npm install -g katex
```

```npm install -g katex@0.13.18```

```bash
find . -type f | grep rst | grep -v index | grep -v jit | xargs rm
# Don't commit the deletions!
```
[Sphinx](http://www.sphinx-doc.org/) via
commands. To run this check locally, run `./check-doxygen.sh` from inside

```bash
ssh my_machine -L 8000:my_machine:8000
et my_machine -t="8000:8000"
```

```bash
python -m http.server 8000 <path_to_html_output>
```

```bash
mkdir -p build cpp/build
rsync -az me@my_machine:/path/to/pytorch/docs/build/html build
rsync -az me@my_machine:/path/to/pytorch/docs/cpp/build/html cpp/build
```

PyTorch will host documentation previews at `https://docs-preview.pytorch.org/pytorch/pytorch/<pr n…
build includes the [Sphinx Doctest Extension](https://www.sphinx-doc.org/en/master/usage/extensions…
## Profiling with `py-spy`

[`py-spy`](https://github.com/benfred/py-spy), a sampling profiler for Python

`py-spy` can be installed via `pip`:

```bash
pip install py-spy
```

To use `py-spy`, first write a Python test script that exercises the
`py-spy` with such a script is to generate a [flame

```bash
py-spy record -o profile.svg --native -- python test_tensor_tensor_add.py
```

the program execution timeline. The `--native` command-line option tells
`py-spy` to record stack frame entries for PyTorch C++ code. To get line numbers
your operating system it may also be necessary to run `py-spy` with root
`py-spy` can also work in an `htop`-like "live profiling" mode and can be
tweaked to adjust the stack sampling rate; see the `py-spy` readme for more

```bash
conda create -n pytorch-myfeature
source activate pytorch-myfeature
```
- Working on a test binary? Run `(cd build && ninja bin/test_binary_name)` to

- `DEBUG=1` will enable debug builds (-g -O0)
- `REL_WITH_DEB_INFO=1` will enable debug symbols with optimizations (-g -O3)
- `USE_DISTRIBUTED=0` will disable distributed (c10d, gloo, mpi, etc.) build.
- `USE_MKLDNN=0` will disable using MKL-DNN.
- `USE_CUDA=0` will disable compiling CUDA (in case you are developing on something not CUDA relate…
- `BUILD_TEST=0` will disable building C++ test binaries.
- `USE_FBGEMM=0` will disable using FBGEMM (quantized 8-bit server operators).
- `USE_NNPACK=0` will disable compiling with NNPACK.
- `USE_QNNPACK=0` will disable QNNPACK build (quantized 8-bit operators).
- `USE_XNNPACK=0` will disable compiling with XNNPACK.
- `USE_FLASH_ATTENTION=0` and `USE_MEM_EFF_ATTENTION=0` will disable compiling flash attention and …
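For example (a sketch combining flags from the list above; adjust to your needs), a CPU-only debug build that skips distributed support and the C++ test binaries could be invoked as:

```bash
DEBUG=1 USE_CUDA=0 USE_DISTRIBUTED=0 BUILD_TEST=0 python setup.py develop
```

Fewer components compiled means a noticeably shorter edit-compile cycle while you iterate.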
`cmake-gui build/`, or directly edit `build/CMakeCache.txt` to adapt build

- https://sarcasm.github.io/notes/dev/compilation-database.html

### Make no-op build fast

same. Using ccache in a situation like this is a real time-saver.

```bash
conda install ccache -c conda-forge
```

```bash
# config: cache dir is ~/.ccache, conf file ~/.ccache/ccache.conf
ccache -M 25Gi # -M 0 for unlimited
ccache -F 0
```
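If the build does not pick up `ccache` on its own, one common approach (an assumption about your setup, not a PyTorch requirement) is to route compiler launches through it via CMake's standard launcher variables before building:

```bash
export CMAKE_C_COMPILER_LAUNCHER=ccache
export CMAKE_CXX_COMPILER_LAUNCHER=ccache
export CMAKE_CUDA_COMPILER_LAUNCHER=ccache
```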
…llow [this guide](https://stackoverflow.com/questions/42730345/how-to-install-llvm-for-mac) instea…

```bash
ln -s /path/to/downloaded/ld.lld /usr/local/bin/ld
```

#### Use pre-compiled headers

you're using CMake newer than 3.16, you can enable pre-compiled headers by

- internal functions can never alias existing names in `<ATen/ATen.h>`
- names in `<ATen/ATen.h>` will work even if you don't explicitly include it.
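One way to turn this on (an assumption about the build switch; verify the variable against your checkout's `setup.py` before relying on it) is to set an environment variable for the build:

```bash
USE_PRECOMPILED_HEADERS=1 python setup.py develop
```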
If re-building without modifying any files results in several CUDA files being
re-compiled, you may be running into an `nvcc` bug where header dependencies are

A compiler wrapper to fix this is provided in `tools/nvcc_fix_deps.py`. You can use

```
% lldb -o "b applySelect" -o "process launch" -- python3 -c "import torch;print(torch.rand(5)[3])"
(lldb) settings set -- target.run-args "-c" "import torch;print(torch.rand(5)[3])"
* thread #1, queue = 'com.apple.main-thread', stop reason = breakpoint 1.1
-> 0x1023d55a8 <+0>: sub sp, sp, #0xd0
```

```
% lldb -o "b applySelect" -o "process launch" -- python3 -c "import torch;print(torch.rand(5)[3])"
(lldb) settings set -- target.run-args "-c" "import torch;print(torch.rand(5)[3])"
* thread #1, queue = 'com.apple.main-thread', stop reason = breakpoint 1.1
-> 239 if (self_sizes.has_value()) {
```

[pytorch-gdb](tools/gdb/pytorch-gdb.py). This script introduces some
pytorch-specific commands which you can use from the GDB prompt. In
particular, `torch-tensor-repr` prints a human-readable repr of an at::Tensor

```
(gdb) torch-tensor-repr *this
Python-level repr of *this:
```

GDB tries to automatically load `pytorch-gdb` thanks to the
[.gdbinit](.gdbinit) at the root of the pytorch repo. However, auto-loading is disabled by default…

```
…"/path/to/pytorch/.gdbinit" auto-loading has been declined by your `auto-load safe-path' set to "$…
add-auto-load-safe-path /path/to/pytorch/.gdbinit
line to your configuration file "/home/YOUR-USERNAME/.gdbinit".
set auto-load safe-path /
line to your configuration file "/home/YOUR-USERNAME/.gdbinit".
"Auto-loading safe path" section in the GDB manual. E.g., run from the shell:
info "(gdb)Auto-loading safe path"
```

As gdb itself suggests, the best way to enable auto-loading of `pytorch-gdb`

```
add-auto-load-safe-path /path/to/pytorch/.gdbinit
```
1. `CUDA_DEVICE_DEBUG=1` will enable CUDA device function debug symbols (`-g -G`).
2. `cuda-gdb` and `cuda-memcheck` are your best CUDA debugging friends. Unlike `gdb`,
   `cuda-gdb` can display actual values in a CUDA tensor (rather than all zeros).
…[--expt-relaxed-constexpr](https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#conste…
nvcc flag. There is a known [issue](https://github.com/ROCm-Developer-Tools/HIP/issues/374)
…[Effective Memory Bandwidth](https://devblogs.nvidia.com/how-implement-performance-metrics-cuda-cc…
…, size, elements", size, "forward", timec, "bandwidth (GB/s)", size*(nbytes_read_write)*1e-9/timec)
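The effective-bandwidth expression in the truncated line above reduces to bytes moved per second; a minimal sketch with hypothetical numbers (1M fp32 elements, one read plus one write per element, a 1 ms kernel) looks like:

```python
size = 1024 * 1024        # number of elements (hypothetical)
nbytes_read_write = 8     # 4 bytes read + 4 bytes written per element (fp32)
timec = 1e-3              # measured kernel time in seconds (hypothetical)

# Same formula as the print statement above: GB/s = bytes * 1e-9 / seconds
bandwidth_gb_s = size * nbytes_read_write * 1e-9 / timec
print(bandwidth_gb_s)  # ~8.39 GB/s
```

Comparing this figure against your GPU's peak memory bandwidth tells you how close a memory-bound kernel is to the hardware limit.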
See more cuda development tips [here](https://github.com/pytorch/pytorch/wiki/CUDA-basics)

you want to run the build, the easiest way is to just run `.ci/pytorch/win-build.sh`.
If you need to rebuild, run `REBUILD=1 .ci/pytorch/win-build.sh` (this will avoid

```bash
cmake --build .
```

that does a use-after-free or stack overflows; on Linux the code

Note: There's a [compilation issue](https://github.com/oneapi-src/oneDNN/issues/812) in several Vis…
## Running clang-tidy

[Clang-Tidy](https://clang.llvm.org/extra/clang-tidy/index.html) is a C++
linter and static analysis tool based on the clang compiler. We run clang-tidy
[`clang-tidy` job in our GitHub Workflow's

To run clang-tidy locally, follow these steps:

1. Install clang-tidy.

```bash
python3 -m tools.linter.clang_tidy.generate_build_files
```

2. Install clang-tidy driver script dependencies

```bash
pip3 install -r tools/linter/clang_tidy/requirements.txt
```

3. Run clang-tidy

```bash
# Run clang-tidy on the entire codebase
make clang-tidy
# Run clang-tidy only on your changes
make clang-tidy CHANGED_ONLY=--changed-only
```

This internally invokes our driver script and closely mimics how clang-tidy is run on CI.
## Pre-commit tidy/linting hook

We use clang-tidy to perform additional
formatting and semantic checking of code. We provide a pre-commit git hook for
performing these checks before a commit is created:

```bash
ln -s ../../tools/git-pre-commit .git/hooks/pre-commit
```

```bash
flake8 $(git diff --name-only $(git merge-base --fork-point main))
```

[Lint as you type](https://github.com/pytorch/pytorch/wiki/Lint-as-you-type)

Fix the code so that no errors are reported when you re-run the above check,
and then commit the fix.
```bash
LIBASAN_RT="$LLVM_ROOT/lib/clang/8.0.0/lib/linux/libclang_rt.asan-x86_64.so"
LDSHARED="clang --shared" \
LDFLAGS="-stdlib=libstdc++" \
CFLAGS="-fsanitize=address -fno-sanitize-recover=all -shared-libasan -pthread" \
CXX_FLAGS="-pthread" \
# you can look at build-asan.sh to find the latest options the CI uses
export ASAN_SYMBOLIZER_PATH=$LLVM_ROOT/bin/llvm-symbolizer
```

```
suo-devfair ~/pytorch ❯ build_with_asan
suo-devfair ~/pytorch ❯ run_with_asan python test/test_jit.py
```

1. Recompile your binary with `-fsanitize=address`.

a third-party executable (Python). It’s too much of a hassle to recompile all

1. Recompile your library with `-fsanitize=address -shared-libasan`. The
   extra `-shared-libasan` tells the compiler to ask for the shared ASAN

- `CMakeLists.txt`, `Makefile`, `binaries`, `cmake`, `conda`, `modules`,
  `scripts` are Caffe2-specific. Don't put PyTorch code in them without
- `mypy*`, `requirements.txt`, `setup.py`, `test`, `tools` are
  PyTorch-specific. Don't put Caffe2 code in them without extra
Once you submit a PR or push a new commit to a branch that is in
subsection](#which-commit-is-used-in-ci) for more details.

our [CI wiki](https://github.com/pytorch/pytorch/wiki/Debugging-using-with-ssh-for-Github-Actions).

### Which commit is used in CI?

commit, and CI is run on that commit (there isn't really any other choice).

For PRs, however, it's a bit more complicated. Consider this commit graph, where
`main` is at commit `A`, and the branch for PR #42 (just a placeholder) is at
commit `B`:

```
       o---o---B (refs/pull/42/head)
      /
---o---o---o---A (merge-destination) - usually main
```

There are two possible choices for which commit to use:

1. Checkout commit `B`, the head of the PR (manually committed by the PR
2. Checkout commit `C`, the hypothetical result of what would happen if the PR

For all practical purposes, most people can think of the commit being used as
commit `B` (choice **1**).

PR's commit (commit `B`). Please note, this scenario would never affect PRs authored by `ghstack` a…
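The graph above can be reproduced in a throwaway repository to make the two choices concrete. Everything here is hypothetical: the branch name, the commit subjects, and `git init -b`, which needs Git 2.28 or newer:

```bash
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q -b main
git config user.email you@example.com
git config user.name you

git commit -q --allow-empty -m "A"   # main is at commit A
git checkout -q -b pr-42
git commit -q --allow-empty -m "B"   # the PR head, i.e. refs/pull/42/head

# Choice 2: build the hypothetical merge commit C that CI could test instead of B
git checkout -q main
git merge -q --no-ff -m "C" pr-42
git log --oneline
```

Checking out `C` tests your change as it would land on top of current `main`; checking out `B` tests exactly what you pushed.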
[Dev Infra Office Hours](https://github.com/pytorch/pytorch/wiki/Dev-Infra-Office-Hours) are hosted…