Lines Matching +full:multigpu +full:- +full:test
28 An object t is considered a cuda-tensor if:
31 A cuda-tensor provides a tensor description dict:
34 typestr: (str) A numpy-style typestr.
35 data: (int, boolean) A (data_ptr, read-only) tuple.
39 https://numba.pydata.org/numba-doc/latest/cuda/cuda_array_interface.html
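For reference, a minimal sketch (assuming a CUDA-enabled PyTorch build with at least one visible GPU) of the description dict that a torch CUDA tensor exposes under `__cuda_array_interface__`, with the keys named above:

```python
# Sketch only: inspect the interface dict on a CUDA tensor.
# Requires a CUDA-enabled PyTorch build and a visible GPU.
import torch

if torch.cuda.is_available():
    t = torch.zeros(2, 3, device="cuda")
    iface = t.__cuda_array_interface__
    # e.g. typestr '<f4', shape (2, 3), data (device_ptr_as_int, False)
    print(iface["typestr"], iface["shape"], iface["data"])
```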
99 # typestr from numpy, cuda-native little-endian
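The expected typestr can be derived from the matching NumPy dtype; since CUDA device memory is treated as little-endian, float32 maps to `'<f4'` and single-byte types use the `'|'` byte-order marker. A small sketch (assumes a little-endian host):

```python
# Sketch: numpy-style typestrs for common dtypes on a little-endian host.
import numpy as np

assert np.dtype(np.float32).str == "<f4"   # 4-byte float, little-endian
assert np.dtype(np.uint8).str == "|u1"     # 1-byte type, byte order irrelevant
```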
149 # float16 comparisons, which aren't supported cpu-side.
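A hedged sketch of the workaround that comment alludes to: upcast float16 results to float32 before comparing them on the CPU (the tensor names here are illustrative, not the original test's):

```python
# Sketch: compare half-precision CUDA results by upcasting first,
# since CPU-side float16 comparison ops were not supported at the time.
import torch

if torch.cuda.is_available():
    a = torch.randn(4, device="cuda").half()
    b = a.clone()
    assert bool((a.float().cpu() == b.float().cpu()).all())
```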
195 # Device-status overrides gradient status.
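A hedged reading of that comment: for a tensor that is both on the CPU and requires grad, the device check is expected to fire first. A small sketch (the exact exception types are assumptions here, not taken from the listing):

```python
# Sketch: the device check wins over the gradient check.
import torch

cpu_grad = torch.zeros(3, requires_grad=True)  # CPU tensor that requires grad
try:
    cpu_grad.__cuda_array_interface__
except AttributeError:
    # Non-CUDA tensors lack the attribute entirely, so the
    # "requires grad" error path is never reached.
    pass
```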
217 @unittest.skipIf(not TEST_MULTIGPU, "No multigpu")
228 # Tensors on a non-default device raise an API error if converted
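A sketch of the guard pattern these matches show; `TEST_MULTIGPU` comes from `torch.testing._internal.common_cuda` and is true only when two or more GPUs are visible. The test body below is illustrative, not the original test:

```python
# Sketch: skip multi-GPU tests when fewer than two devices are available.
import unittest

import torch
from torch.testing._internal.common_cuda import TEST_MULTIGPU


class ExampleMultiGpu(unittest.TestCase):
    @unittest.skipIf(not TEST_MULTIGPU, "No multigpu")
    def test_non_default_device(self):
        # Place a tensor on a device other than the default cuda:0.
        t = torch.arange(4, device="cuda:1")
        self.assertEqual(t.device.index, 1)
```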
241 "Test is temporary disabled, see https://github.com/pytorch/pytorch/issues/54418"
253 https://numba.pydata.org/numba-doc/latest/cuda/cuda_array_interface.html
278 # Zero-copy when using `torch.as_tensor()`
296 # Implicit-copy because `torch_ary` is a CPU array
310 # Explicit-copy when using `torch.tensor()`
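A hedged sketch of the three construction paths these comments describe, using a numba device array as the producer (assumes numba and a CUDA device; the `device="cpu"` case and the pointer comparisons are illustrative of what the tests check, not copied from them):

```python
# Sketch: zero-copy vs. implicit copy vs. explicit copy when building
# torch tensors from an object exposing __cuda_array_interface__.
import numpy as np
import numba.cuda
import torch

numba_ary = numba.cuda.to_device(np.arange(6, dtype=np.float32))
src_ptr = numba_ary.__cuda_array_interface__["data"][0]

# Zero-copy: torch.as_tensor() with a CUDA device reuses the same allocation.
shared = torch.as_tensor(numba_ary, device="cuda")
assert shared.data_ptr() == src_ptr

# Implicit copy: asking for a CPU tensor forces a device-to-host copy.
cpu_copy = torch.as_tensor(numba_ary, device="cpu")

# Explicit copy: torch.tensor() always allocates new memory, even on the same device.
owned = torch.tensor(numba_ary, device="cuda")
assert owned.data_ptr() != src_ptr
```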
330 # This could, in theory, be combined with test_from_cuda_array_interface but that test
350 "Test is temporary disabled, see https://github.com/pytorch/pytorch/issues/54418"
368 "Test is temporary disabled, see https://github.com/pytorch/pytorch/issues/54418"
373 @unittest.skipIf(not TEST_MULTIGPU, "No multigpu")
377 # Zero-copy: both torch/numba default to device 0 and can interop freely
385 # Implicit-copy: when the Numba and Torch device differ
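In the other direction, a hedged sketch of the zero-copy case these last matches describe (assumes numba and that both libraries use the default device 0; the cross-device copy case is omitted):

```python
# Sketch: wrap a torch CUDA tensor as a numba device array without copying.
import numba.cuda
import torch

t = torch.arange(4, dtype=torch.float32, device="cuda")  # default device 0

d_ary = numba.cuda.as_cuda_array(t)  # zero-copy view of the same allocation
assert d_ary.__cuda_array_interface__["data"][0] == t.data_ptr()
```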