.. _hip-semantics:

HIP (ROCm) semantics
====================

ROCm\ |trade| is AMD’s open source software platform for GPU-accelerated high
performance computing and machine learning. HIP is ROCm’s C++ dialect, used to
convert existing CUDA applications such as PyTorch to portable C++ and for new
projects that require portability between AMD and NVIDIA.

HIP Interfaces Reuse the CUDA Interfaces
----------------------------------------

PyTorch for HIP intentionally reuses the existing :mod:`torch.cuda` interfaces,
so existing CUDA code and models typically require few, if any, changes.

The example from :ref:`cuda-semantics` will work exactly the same for HIP::

    cuda = torch.device('cuda')     # default HIP device ('rocm'/'hip' are not valid, use 'cuda')
    cuda2 = torch.device('cuda:2')  # GPU 2 (these are 0-indexed)
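
As in :ref:`cuda-semantics`, tensors land on a HIP GPU either through an explicit
``device=`` argument or inside a ``torch.cuda.device`` context manager. A minimal
sketch, assuming at least three GPUs are visible::

    import torch

    cuda = torch.device('cuda')
    cuda2 = torch.device('cuda:2')

    x = torch.tensor([1., 2.], device=cuda)        # allocated on the current default GPU
    with torch.cuda.device(2):
        y = torch.tensor([1., 2.], device=cuda)    # inside the context, 'cuda' means GPU 2
    z = torch.tensor([1., 2.]).to(cuda2)           # moved to GPU 2 explicitly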

Checking for HIP
----------------
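
Both CUDA and ROCm builds report available GPUs through
:meth:`~torch.cuda.is_available`. To tell the two apart at runtime, inspect
``torch.version.hip``, which is a version string on ROCm builds and ``None`` on
CUDA builds; a minimal sketch::

    import torch

    if torch.cuda.is_available():
        if torch.version.hip is not None:
            print("PyTorch built for ROCm/HIP:", torch.version.hip)
        else:
            print("PyTorch built for CUDA:", torch.version.cuda)
    else:
        print("No GPU backend available")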

TensorFloat-32 (TF32) on ROCm
-----------------------------
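
The TF32 switches described in :ref:`cuda-semantics` are exposed through the
same :mod:`torch.backends` flags on ROCm builds; whether they change numerical
behavior depends on the GPU and ROCm version. A minimal sketch for querying and
toggling them::

    import torch

    # Query the current settings.
    print(torch.backends.cuda.matmul.allow_tf32)
    print(torch.backends.cudnn.allow_tf32)

    # Force full FP32 matmuls/convolutions when accuracy matters.
    torch.backends.cuda.matmul.allow_tf32 = False
    torch.backends.cudnn.allow_tf32 = False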

.. _rocm-memory-management:

Memory management
-----------------

PyTorch uses a caching memory allocator to speed up memory allocations, so
unused memory managed by the allocator will still show as if used in
``rocm-smi``. You can use :meth:`~torch.cuda.memory_allocated` and
:meth:`~torch.cuda.max_memory_allocated` to monitor memory occupied by tensors.
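
A short sketch exercising these counters; the values reported here can differ
from what ``rocm-smi`` shows because the caching allocator holds on to freed
blocks::

    import torch

    x = torch.empty(1024, 1024, device='cuda')    # ~4 MB of FP32
    print(torch.cuda.memory_allocated())          # bytes currently occupied by tensors
    print(torch.cuda.memory_reserved())           # bytes held by the caching allocator

    del x
    torch.cuda.empty_cache()                      # release cached blocks back to the device
    print(torch.cuda.memory_allocated(), torch.cuda.memory_reserved())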

.. _hipfft-plan-cache:

hipFFT/rocFFT plan cache
------------------------

.. _torch-distributed-backends:

torch.distributed backends
--------------------------

Currently, only the "nccl" and "gloo" backends for distributed applications
are supported on ROCm.
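
No code changes are needed to select them; on ROCm builds the ``"nccl"``
backend is provided by RCCL. A minimal single-node sketch, assuming
environment-variable rendezvous and one GPU per rank::

    import os
    import torch
    import torch.distributed as dist

    def init_worker(rank: int, world_size: int) -> None:
        os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
        os.environ.setdefault("MASTER_PORT", "29500")
        # "nccl" maps to RCCL on ROCm; "gloo" also works for CPU tensors.
        dist.init_process_group("nccl", rank=rank, world_size=world_size)
        torch.cuda.set_device(rank)

        t = torch.ones(1, device="cuda")
        dist.all_reduce(t)                 # sums the tensor across all ranks
        dist.destroy_process_group()

    # Launch one process per GPU (e.g. with torchrun) and call
    # init_worker(rank, world_size) in each of them.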

.. _cuda-api-to_hip-api-mappings:

CUDA API to HIP API mappings in C++
-----------------------------------

Refer to CUDA Semantics doc
---------------------------

For any sections not listed here, please refer to the CUDA semantics doc: :ref:`cuda-semantics`.

Enabling kernel asserts
-----------------------

Kernel asserts are disabled by default on ROCm. They can be enabled by
rebuilding PyTorch from source with the following option added to the CMake
command::

    -DROCM_FORCE_ENABLE_GPU_ASSERTS:BOOL=ON
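
With asserts enabled, out-of-range accesses inside kernels surface as a
device-side assert error rather than undefined behavior. A hypothetical
trigger, assuming a build with asserts turned on::

    import torch

    emb = torch.nn.Embedding(10, 4).cuda()
    idx = torch.tensor([11], device='cuda')   # index 11 is out of range for a 10-row table
    out = emb(idx)                            # kernel assert fires here (or at the next sync)
    torch.cuda.synchronize()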