My model reports "cuda runtime error(2): out of memory"
-------------------------------------------------------
Sometimes, it can be non-obvious when differentiable variables can
occur. Consider the following loop (sourced from `this forum post
<https://discuss.pytorch.org/t/high-memory-usage-while-training/162>`_):

.. code-block:: python

    total_loss = 0
    for i in range(10000):
        optimizer.zero_grad()
        output = model(input)
        loss = criterion(output)
        loss.backward()
        optimizer.step()
        total_loss += loss

Here, ``total_loss`` is accumulating history across your training loop, since
``loss`` is a differentiable variable with autograd history. You can fix this
by writing ``total_loss += float(loss)`` instead.
Other instances of this problem:
`1 <https://discuss.pytorch.org/t/resolved-gpu-out-of-memory-error-with-batch-size-1/3719>`_.
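As a minimal runnable sketch of the ``float(loss)`` fix (the tiny linear model, data, and optimizer here are hypothetical stand-ins for your training setup):

```python
import torch

# Hypothetical stand-in model and data; the point is the accumulator.
model = torch.nn.Linear(4, 1)
criterion = torch.nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

inputs = torch.randn(8, 4)
targets = torch.randn(8, 1)

total_loss = 0.0
for _ in range(3):
    optimizer.zero_grad()
    loss = criterion(model(inputs), targets)
    loss.backward()
    optimizer.step()
    # float() detaches the scalar from the graph; `total_loss += loss`
    # would instead keep every iteration's graph alive in memory.
    total_loss += float(loss)

print(type(total_loss))  # a plain Python float, carrying no autograd history
```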
The scopes of locals can be larger than you expect. For example:

.. code-block:: python

    for i in range(5):
        intermediate = f(input[i])
        result += g(intermediate)
    output = h(result)
    return output

Here, ``intermediate`` remains live even while ``h`` is executing, because its
scope extends past the end of the loop. To free it earlier, you should
``del intermediate`` when you are done with it.
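A small self-contained sketch of the ``del`` pattern, with hypothetical ``f``, ``g``, and ``h`` filled in as trivial arithmetic functions so it runs as-is:

```python
# Hypothetical stand-ins for the functions in the example above.
def f(x): return x * 2
def g(x): return x + 1
def h(x): return x - 3

result = 0
for i in range(5):
    intermediate = f(i)
    result += g(intermediate)
    del intermediate  # release the reference before h() runs
output = h(result)
print(output)
```

With real tensors, the ``del`` matters because ``intermediate`` would otherwise pin its memory until the enclosing function returns.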
`this forum post <https://discuss.pytorch.org/t/help-clarifying-repackage-hidden-in-word-language-model/226>`_.
You can trade off memory for compute by using
`checkpoint <https://pytorch.org/docs/stable/checkpoint.html>`_.
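A minimal sketch of what checkpointing might look like, assuming a hypothetical two-stage model; the first stage's activations are recomputed during the backward pass instead of being stored:

```python
import torch
from torch.utils.checkpoint import checkpoint

# Hypothetical two-stage model; only stage1 is checkpointed.
stage1 = torch.nn.Sequential(torch.nn.Linear(16, 16), torch.nn.ReLU())
stage2 = torch.nn.Linear(16, 1)

x = torch.randn(4, 16, requires_grad=True)
# stage1's intermediate activations are not kept; they are recomputed
# during backward, trading extra compute for lower peak memory.
out = stage2(checkpoint(stage1, x, use_reentrant=False)).sum()
out.backward()
```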
My GPU memory isn't freed properly
----------------------------------
PyTorch uses a caching memory allocator to speed up memory allocations. As a
result, the values shown in ``nvidia-smi`` usually don't reflect the true
memory usage. See :ref:`cuda-memory-management` for more details about GPU
memory management.
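To see the difference yourself, you can compare the allocator's own statistics; a small sketch (the ``report`` helper is illustrative, not a PyTorch API):

```python
import torch

def report() -> dict:
    """Illustrative helper comparing live-tensor usage with the cached pool."""
    if not torch.cuda.is_available():  # degrade gracefully on CPU-only machines
        return {"allocated": 0, "reserved": 0}
    return {
        # bytes occupied by live tensors
        "allocated": torch.cuda.memory_allocated(),
        # bytes held by the caching allocator -- closer to what nvidia-smi shows
        "reserved": torch.cuda.memory_reserved(),
    }

stats = report()
print(stats)
```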
If your GPU memory isn't freed even after Python quits, it is very likely that
some Python subprocesses are still alive. You may find them via
``ps -elf | grep python`` and manually kill them with ``kill -9 [pid]``.
My out of memory exception handler can't allocate memory
--------------------------------------------------------
You may have some code that tries to recover from out of memory errors.

.. code-block:: python

    try:
        run_model(batch_size)
    except RuntimeError: # Out of memory
        for _ in range(batch_size):
            run_model(1)
But you may find that when you do run out of memory, your recovery code can't
allocate either. That's because the Python exception object holds a reference
to the stack frame where the error was raised, keeping all of its local
tensors alive. The fix is to move the recovery code outside of the ``except``
clause:

.. code-block:: python

    oom = False
    try:
        run_model(batch_size)
    except RuntimeError: # Out of memory
        oom = True

    if oom:
        for _ in range(batch_size):
            run_model(1)
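This frame-holding behavior is plain Python, not PyTorch-specific; a small sketch (the ``Big`` class is a hypothetical stand-in for a large allocation) demonstrates that a held exception keeps the raising frame's locals alive:

```python
import gc

class Big:
    """Stand-in for a large tensor; tracks how many instances are alive."""
    alive = 0
    def __init__(self):
        Big.alive += 1
    def __del__(self):
        Big.alive -= 1

def fail():
    big = Big()  # local that the exception's traceback will keep alive
    raise RuntimeError("out of memory (simulated)")

err = None
try:
    fail()
except RuntimeError as e:
    err = e  # holding the exception holds fail()'s frame, and thus `big`

gc.collect()
held = Big.alive       # still 1: the traceback references the frame

del err
gc.collect()
released = Big.alive   # 0 once the exception (and its traceback) is dropped
```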
.. _dataloader-workers-random-seed:
My data loader workers return identical random numbers
------------------------------------------------------
You are likely using other libraries to generate random numbers in the dataset
while worker subprocesses are started via ``fork``, so every worker inherits
the same RNG state. See :class:`torch.utils.data.DataLoader`'s documentation
for how to properly set up random seeds in workers with its
:attr:`worker_init_fn` option.
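A minimal sketch of per-worker seeding, assuming a toy dataset and a ``seed_worker`` helper of our own (not a built-in):

```python
import numpy as np
import torch
from torch.utils.data import DataLoader, Dataset

class RandDataset(Dataset):
    """Toy dataset whose items come from NumPy's global RNG."""
    def __len__(self):
        return 8
    def __getitem__(self, idx):
        return float(np.random.rand())

def seed_worker(worker_id):
    # Derive a distinct NumPy seed from this worker's torch seed, so forked
    # workers don't all replay the same NumPy RNG state.
    np.random.seed(torch.initial_seed() % 2**32)

loader = DataLoader(RandDataset(), num_workers=2, worker_init_fn=seed_worker)
values = [float(v) for v in loader]
```

Without ``worker_init_fn``, both workers would inherit identical NumPy state and emit the same sequence of "random" numbers.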
.. _pack-rnn-unpack-with-data-parallelism:
My recurrent network doesn't work with data parallelism
-------------------------------------------------------
There is a subtlety in using the
``pack sequence -> recurrent network -> unpack sequence`` pattern in a
:class:`~torch.nn.Module` with :class:`~torch.nn.DataParallel`. The input to
:meth:`forward` on each device is only a slice of the entire input, and the
unpack operation :func:`torch.nn.utils.rnn.pad_packed_sequence` by default
only pads up to the longest sequence it sees on that device, so size
mismatches occur when the results are gathered. You can instead pass its
``total_length`` argument to pad every replica's output to the same length.
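A sketch of the ``total_length`` fix inside a module's ``forward`` (the ``RNNWrapper`` module and tensor sizes here are illustrative):

```python
import torch
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence

class RNNWrapper(torch.nn.Module):
    """Illustrative module using pack -> RNN -> unpack with total_length."""
    def __init__(self):
        super().__init__()
        self.rnn = torch.nn.GRU(4, 8, batch_first=True)

    def forward(self, x, lengths):
        # DataParallel scatters along the batch dim, so x.size(1) is still
        # the full padded time length on every replica.
        total_length = x.size(1)
        packed = pack_padded_sequence(
            x, lengths, batch_first=True, enforce_sorted=False)
        out, _ = self.rnn(packed)
        # Pad back to total_length so all replicas' outputs line up on gather.
        out, _ = pad_packed_sequence(
            out, batch_first=True, total_length=total_length)
        return out

x = torch.randn(3, 10, 4)            # batch of 3, padded to length 10
lengths = torch.tensor([10, 7, 5])   # true sequence lengths
out = RNNWrapper()(x, lengths)       # shape: (3, 10, 8)
```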