
![PyTorch Logo](https://github.com/pytorch/pytorch/raw/main/docs/source/_static/img/pytorch-logo-dark.png)

--------------------------------------------------------------------------------

PyTorch is a Python package that provides two high-level features:
- Tensor computation (like NumPy) with strong GPU acceleration
- Deep neural networks built on a tape-based autograd system

<!-- toc -->

- [More About PyTorch](#more-about-pytorch)
  - [A GPU-Ready Tensor Library](#a-gpu-ready-tensor-library)
  - [Dynamic Neural Networks: Tape-Based Autograd](#dynamic-neural-networks-tape-based-autograd)
  - [Python First](#python-first)
  - [Imperative Experiences](#imperative-experiences)
  - [Fast and Lean](#fast-and-lean)
  - [Extensions Without Pain](#extensions-without-pain)
- [Installation](#installation)
  - [Binaries](#binaries)
    - [NVIDIA Jetson Platforms](#nvidia-jetson-platforms)
  - [From Source](#from-source)
    - [Prerequisites](#prerequisites)
      - [NVIDIA CUDA Support](#nvidia-cuda-support)
      - [AMD ROCm Support](#amd-rocm-support)
      - [Intel GPU Support](#intel-gpu-support)
    - [Get the PyTorch Source](#get-the-pytorch-source)
    - [Install Dependencies](#install-dependencies)
    - [Install PyTorch](#install-pytorch)
      - [Adjust Build Options (Optional)](#adjust-build-options-optional)
  - [Docker Image](#docker-image)
    - [Using pre-built images](#using-pre-built-images)
    - [Building the image yourself](#building-the-image-yourself)
  - [Building the Documentation](#building-the-documentation)
  - [Previous Versions](#previous-versions)
- [Getting Started](#getting-started)
- [Resources](#resources)
- [Communication](#communication)
- [Releases and Contributing](#releases-and-contributing)
- [The Team](#the-team)
- [License](#license)

<!-- tocstop -->
| Component | Description |
| ---- | --- |
| [**torch.autograd**](https://pytorch.org/docs/stable/autograd.html) | A tape-based automatic differentiation library that supports all differentiable Tensor operations in torch |
- A replacement for NumPy to use the power of GPUs.
- A deep learning research platform that provides maximum flexibility and speed.

### A GPU-Ready Tensor Library

### Dynamic Neural Networks: Tape-Based Autograd

One has to build a neural network and reuse the same structure again and again.

With PyTorch, we use a technique called reverse-mode auto-differentiation, which allows you to

[torch-autograd](https://github.com/twitter/torch-autograd),
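The tape-based idea can be illustrated with a minimal, pure-Python toy. This is only a sketch: the names `Var`, `mul`, `add`, and `backward` are invented for illustration, and PyTorch's real autograd is implemented in C++ and far more general. Each operation records a backward closure on a tape, and gradients are computed by replaying the tape in reverse.

```python
# Toy tape-based reverse-mode autodiff (illustration only, not PyTorch's API).
tape = []  # records a backward closure for every operation, in execution order

class Var:
    def __init__(self, value):
        self.value = value
        self.grad = 0.0

def _accum(v, g):
    v.grad += g  # gradients accumulate, as in autograd

def mul(a, b):
    out = Var(a.value * b.value)
    # d(out)/da = b.value, d(out)/db = a.value
    tape.append(lambda: (_accum(a, out.grad * b.value),
                         _accum(b, out.grad * a.value)))
    return out

def add(a, b):
    out = Var(a.value + b.value)
    tape.append(lambda: (_accum(a, out.grad), _accum(b, out.grad)))
    return out

def backward(out):
    out.grad = 1.0
    for back in reversed(tape):  # replay the tape in reverse
        back()

x = Var(3.0)
y = Var(4.0)
z = add(mul(x, x), y)   # z = x*x + y
backward(z)
print(x.grad, y.grad)   # dz/dx = 2x = 6.0, dz/dy = 1.0
```

Because the tape is rebuilt on every forward pass, ordinary Python control flow (loops, conditionals) can change the recorded graph from one iteration to the next, which is the define-by-run behavior described here.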
You can use it naturally like you would use [NumPy](https://www.numpy.org/) / [SciPy](https://www.scipy.org/) / [scikit-learn](https://scikit-learn.org) etc.
[or your favorite NumPy-based libraries such as SciPy](https://pytorch.org/tutorials/advanced/numpy_extensions_tutorial.html).

No wrapper code needs to be written. You can see [a tutorial here](https://pytorch.org/tutorials/advanced/cpp_extension.html) and [an example here](https://github.com/pytorch/extension-cpp).

Commands to install binaries via Conda or pip wheels are on our website: [https://pytorch.org/get-started/locally/](https://pytorch.org/get-started/locally/)
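As one illustration (the exact command depends on your OS, package manager, and accelerator, so always copy the command that the selector on the website generates for you), a default pip install looks like:

```shell
# Example only -- generate the exact command at https://pytorch.org/get-started/locally/
pip install torch torchvision torchaudio
```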
Python wheels for NVIDIA's Jetson Nano, Jetson TX1/TX2, Jetson Xavier NX/AGX, and Jetson AGX Orin are provided [here](https://forums.developer.nvidia.com/t/pytorch-for-jetson-version-1-10-now-available/72048) and the L4T container is published [here](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/l4t-pytorch).

They require JetPack 4.2 and above, and [@dusty-nv](https://github.com/dusty-nv) and [@ptrblck](https://github.com/ptrblck) are maintaining them.
- Python 3.8 or later (for Linux, Python 3.8.1+ is needed)
- A compiler that fully supports C++17, such as clang or gcc (on Linux, gcc 9.4.0 or newer is required)
- Visual Studio or Visual Studio Build Tools on Windows

Professional, or Community Editions. You can also install the build tools from
https://visualstudio.microsoft.com/visual-cpp-build-tools/. The build tools *do not*

\* We highly recommend installing an [Anaconda](https://www.anaconda.com/download) environment. You will get a high-quality BLAS library (MKL) and controlled dependency versions regardless of your Linux distro.
$ conda create -y -n <CONDA_NAME>

$ conda create -y -n <CONDA_NAME>

$ call "C:\Program Files\Microsoft Visual Studio\<VERSION>\Community\VC\Auxiliary\Build\vcvarsall.bat" x64
If you want to compile with CUDA support, [select a supported version of CUDA from our support matrix](https://pytorch.org/get-started/locally/), then install the following:
- [NVIDIA CUDA](https://developer.nvidia.com/cuda-downloads)
- [NVIDIA cuDNN](https://developer.nvidia.com/cudnn) v8.5 or above
- [Compiler](https://gist.github.com/ax3l/9489132) compatible with CUDA

Note: You can refer to the [cuDNN Support Matrix](https://docs.nvidia.com/deeplearning/cudnn/reference/support-matrix.html) for the cuDNN versions compatible with the various supported CUDA and CUDA driver versions and NVIDIA hardware.

If you are building for NVIDIA's Jetson platforms (Jetson Nano, TX1, TX2, AGX Xavier), instructions to install PyTorch for Jetson Nano are [available here](https://devtalk.nvidia.com/default/topic/1049071/jetson-nano/pytorch-for-jetson-nano/).
##### AMD ROCm Support
If you want to compile with ROCm support, install
- [AMD ROCm](https://rocm.docs.amd.com/en/latest/deploy/linux/quick_start.html) 4.0 and above
- ROCm is currently supported only for Linux systems.

If you want to disable ROCm support, export the environment variable `USE_ROCM=0`.

- [PyTorch Prerequisites for Intel GPUs](https://www.intel.com/content/www/us/en/developer/articles/tool/pytorch-prerequisites-for-intel-gpus.html) instructions.
- Intel GPU is supported for Linux and Windows.
git clone --recursive https://github.com/pytorch/pytorch

git submodule update --init --recursive

pip install -r requirements.txt

pip install mkl-static mkl-include

conda install -c pytorch magma-cuda121  # or the magma-cuda* that matches your CUDA version from https://anaconda.org/pytorch/repo

pip install mkl-static mkl-include

conda install pkg-config libuv

pip install mkl-static mkl-include

conda install -c conda-forge libuv=1.39

Please **note** that starting from PyTorch 2.5, the PyTorch build with XPU supports both new and old C++ ABIs. Previously, XPU only supported the new C++ ABI. If you want to compile with Intel GPU support, please follow [Intel GPU Support](#intel-gpu-support).
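As a sketch of what selecting the ABI can look like (follow the linked Intel GPU Support instructions for the authoritative steps), the switch below is the standard libstdc++ dual-ABI variable:

```shell
# Illustrative only: 1 selects the new C++11 ABI, 0 the old pre-C++11 ABI
export _GLIBCXX_USE_CXX11_ABI=1
```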
If you're compiling for AMD ROCm then first run this command:

# Only run this if you're compiling for ROCm

export CMAKE_PREFIX_PATH=${CONDA_PREFIX:-"$(dirname $(which conda))/../"}

> _Aside:_ If you are using [Anaconda](https://www.anaconda.com/distribution/#download-section), you may experience an error caused by the linker:
>
> build/temp.linux-x86_64-3.7/torch/csrc/stub.o: file not recognized: file format not recognized
If you want to build legacy Python code, please refer to [Building on legacy code and CUDA](https://github.com/pytorch/pytorch/blob/main/CONTRIBUTING.md#building-on-legacy-code-and-cuda).

**CPU-only builds**

Note on OpenMP: The desired OpenMP implementation is Intel OpenMP (iomp). In order to link against iomp, you'll need to manually download the library and set up the build environment by tweaking `CMAKE_INCLUDE_PATH` and `LIB`. The instructions [here](https://github.com/pytorch/pytorch/blob/main/docs/source/notes/windows.rst#building-from-source) are an example of setting up both MKL and Intel OpenMP. Without these configurations for CMake, the Microsoft Visual C OpenMP runtime (vcomp) will be used.
**CUDA based build**

[NVTX](https://docs.nvidia.com/gameworks/content/gameworkslibrary/nvtx/nvidia_tools_extension_library_nvtx.htm) is needed to build PyTorch with CUDA.

[Magma](https://developer.nvidia.com/magma), [oneDNN, a.k.a. MKLDNN or DNNL](https://github.com/oneapi-src/oneDNN), and [Sccache](https://github.com/mozilla/sccache) are often needed. Please refer to the [installation-helper](https://github.com/pytorch/pytorch/tree/main/.ci/pytorch/win-test-helpers/installation-helpers) to install them.

You can refer to the [build_pytorch.bat](https://github.com/pytorch/pytorch/blob/main/.ci/pytorch/win-test-helpers/build_pytorch.bat) script for other environment variable configurations.
for /f "usebackq tokens=*" %i in (`"%ProgramFiles(x86)%\Microsoft Visual Studio\Installer\vswhere.exe" -version [15^,17^) -products * -latest -property installationPath`) do call "%i\VC\Auxiliary\Build\vcvarsall.bat" x64 -vcvars_ver=%CMAKE_GENERATOR_TOOLSET_VERSION%
##### Adjust Build Options (Optional)

the following. For example, adjusting the pre-detected directories for CuDNN or BLAS can be done

export CMAKE_PREFIX_PATH=${CONDA_PREFIX:-"$(dirname $(which conda))/../"}
python setup.py build --cmake-only
ccmake build  # or cmake-gui build

export CMAKE_PREFIX_PATH=${CONDA_PREFIX:-"$(dirname $(which conda))/../"}
MACOSX_DEPLOYMENT_TARGET=10.9 CC=clang CXX=clang++ python setup.py build --cmake-only
ccmake build  # or cmake-gui build
### Docker Image

#### Using pre-built images

You can also pull a pre-built Docker image from Docker Hub and run it with Docker v19.03+:

docker run --gpus all --rm -ti --ipc=host pytorch/pytorch:latest

should increase shared memory size either with `--ipc=host` or `--shm-size` command line options to `nvidia-docker run`.
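For example, a run that raises the shared memory limit explicitly instead of sharing the host IPC namespace might look like this (the 8g size is illustrative; pick a value suited to your workload):

```shell
# Illustrative: give the container 8 GB of shared memory for DataLoader workers
docker run --gpus all --rm -ti --shm-size=8g pytorch/pytorch:latest
```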
**NOTE:** Must be built with a Docker version > 18.06.

The `Dockerfile` is supplied to build images with CUDA 11.1 support and cuDNN v8.

make -f docker.Makefile
# images are tagged as docker.io/${your_docker_username}/pytorch

You can also pass the `CMAKE_VARS="..."` environment variable to specify additional CMake variables to be passed to CMake during the build.
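For instance (the specific variables below are only examples of CMake options the build understands):

```shell
# Example: pass extra CMake variables through to the image build
CMAKE_VARS="BUILD_CAFFE2=0 BUILD_TEST=0" make -f docker.Makefile
```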
make -f docker.Makefile

To build documentation in various formats, you will need [Sphinx](http://www.sphinx-doc.org) and the

pip install -r requirements.txt

You can then build the documentation by running `make <format>` from the

`npm install -g katex`

```npm install -g katex@0.13.18```

on [our website](https://pytorch.org/previous-versions).
Three pointers to get you started:
- [Tutorials: get you started with understanding and using PyTorch](https://pytorch.org/tutorials/)
- [Examples: easy to understand PyTorch code across all domains](https://github.com/pytorch/examples)
- [The API Reference](https://pytorch.org/docs/)
- [Glossary](https://github.com/pytorch/pytorch/blob/main/GLOSSARY.md)
* [Intro to Deep Learning with PyTorch from Udacity](https://www.udacity.com/course/deep-learning-pytorch--ud188)
* [Intro to Machine Learning with PyTorch from Udacity](https://www.udacity.com/course/intro-to-machine-learning-nanodegree--nd229)
* [Deep Neural Networks with PyTorch from Coursera](https://www.coursera.org/learn/deep-neural-networks-with-pytorch)
* Newsletter: No-noise, a one-way email newsletter with important announcements about PyTorch. You can sign up here: https://eepurl.com/cbG0rv

We appreciate all contributions. If you are planning to contribute back bug fixes, please do so without any further discussion.
PyTorch is a community-driven project with several skillful engineers and researchers contributing to it.

A non-exhaustive but growing list needs to mention: [Trevor Killeen](https://github.com/killeent), [Sasank Chilamkurthy](https://github.com/chsasank), [Sergey Zagoruyko](https://github.com/szagoruyko), [Adam Lerer](https://github.com/adamlerer), [Francisco Massa](https://github.com/fmassa), [Alykhan Tejani](https://github.com/alykhantejani), [Luca Antiga](https://github.com/lantiga), [Alban Desmaison](https://github.com/albanD), [Andreas Koepf](https://github.com/andreaskoepf), [James Bradbury](https://github.com/jekbradbury), [Zeming Lin](https://github.com/ebetica), [Yuandong Tian](https://github.com/yuandong-tian), [Guillaume Lample](https://github.com/glample), [Marat Dukhan](https://github.com/Maratyszcza), [Natalia Gimelshein](https://github.com/ngimel), [Christian Sarofeen](https://github.com/csarofeen), [Martin Raison](https://github.com/martinraison), [Edward Yang](https://github.com/ezyang), [Zachary Devito](https://github.com/zdevito).

PyTorch has a BSD-style license, as found in the [LICENSE](LICENSE) file.