Searched +full:multigpu +full:- +full:test (Results 1 – 12 of 12) sorted by relevance
# TODO: this looks sort of similar to _linux-test, but there are like a dozen
name: test
build-environment:
  description: Top-level label for what's being built/tested.
test-matrix:
  description: JSON description of what test configs to run.
docker-image:
sync-tag:
  … job with the same `sync-tag` is identical.
timeout-minutes:
[all …]

name: linux-test
build-environment:
  description: Top-level label for what's being built/tested.
test-matrix:
  description: JSON description of what test configs to run.
docker-image:
sync-tag:
  … job with the same `sync-tag` is identical.
timeout-minutes:
use-gha:
[all …]

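The `test-matrix` input these workflow templates declare is passed around as a JSON string describing which test configs to run. A minimal sketch of consuming such a matrix; the matrix contents and field names below are invented for illustration, not the workflows' actual schema:

```python
import json

# Invented example matrix; the real schema is defined by the calling workflows.
test_matrix = json.loads("""
{
  "include": [
    {"config": "default",  "shard": 1, "num_shards": 2, "runner": "linux.4xlarge"},
    {"config": "multigpu", "shard": 1, "num_shards": 1, "runner": "linux.16xlarge.nvidia.gpu"}
  ]
}
""")

for cfg in test_matrix["include"]:
    print(cfg["config"], f'{cfg["shard"]}/{cfg["num_shards"]}', cfg["runner"])
```
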
- cron: 45 0,8,16 * * 1-5
- cron: 45 4 * * 0,6
- cron: 45 4,12,20 * * 1-5
- cron: 45 12 * * 0,6
- cron: 29 8 * * *  # about 1:29am PDT, for mem leak check and rerun disabled tests
- ciflow/periodic/*
- release/*
…-${{ github.event.pull_request.number || github.ref_name }}-${{ github.ref_type == 'branch' && git…
cancel-in-progress: true
permissions: read-all
[all …]

An object t is considered a cuda-tensor if:
A cuda-tensor provides a tensor description dict:
    typestr: (str) A numpy-style typestr.
    data: (int, boolean) A (data_ptr, read-only) tuple.
https://numba.pydata.org/numba-doc/latest/cuda/cuda_array_interface.html
# typestr from numpy, cuda-native little-endian
# float16 comparisons, which aren't supported cpu-side.
# Device-status overrides gradient status.
@unittest.skipIf(not TEST_MULTIGPU, "No multigpu")
# Tensors on non-default device raise api error if converted
[all …]

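For reference, a CUDA tensor in PyTorch exposes exactly this description dict via `__cuda_array_interface__`. A minimal sketch of inspecting it (requires a CUDA device):

```python
import torch

if torch.cuda.is_available():
    t = torch.arange(6, dtype=torch.float32, device="cuda").reshape(2, 3)
    desc = t.__cuda_array_interface__
    print(desc["typestr"])  # "<f4": numpy-style typestr, cuda-native little-endian
    print(desc["shape"])    # (2, 3)
    print(desc["data"])     # (data_ptr, read_only) tuple, e.g. (140234…, False)
```
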
H(x)[i] = conj(x[-i])
mid = (x.size(dim) - 1) // 2
idx_neg[dim] = slice(-mid, None)
# Decompose into Hermitian (FFT of real) and anti-Hermitian (FFT of imaginary)
n_fft = x.size(-2)
hconj = _hermitian_conj(x, dim=-2)
x_antihermitian = (x - hconj) / 2
istft_imag = torch.istft(-1j * x_antihermitian[slc], *args, **kwargs, onesided=True)
.. math:: X(m, \omega) = \sum_n x[n] w[n - m] e^{-j n \omega}
X = torch.empty((n_fft, (x.numel() - n_fft + hop_length) // hop_length),
[all …]

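The identity behind this decomposition is easy to verify on a plain FFT: for z = a + jb with a, b real, fft(a) is the Hermitian part of fft(z) and fft(jb) is the anti-Hermitian part. A small self-contained check (my own sketch, not the test's `_hermitian_conj` helper):

```python
import torch

z = torch.randn(8, dtype=torch.complex64)
X = torch.fft.fft(z)

# H(X)[i] = conj(X[-i]) with indices taken mod n: flip reverses, roll realigns bin 0.
hconj = torch.roll(X.flip(0), 1).conj()

X_hermitian = (X + hconj) / 2       # equals fft(z.real)
X_antihermitian = (X - hconj) / 2   # equals fft(1j * z.imag)

torch.testing.assert_close(X_hermitian, torch.fft.fft(z.real))
torch.testing.assert_close(X_antihermitian, torch.fft.fft(1j * z.imag))
```
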
name: linux-test
build-environment:
  description: Top-level label for what's being built/tested.
test-matrix:
  description: JSON description of what test configs to run.
docker-image:
sync-tag:
  … job with the same `sync-tag` is identical.
use-gha:
dashboard-tag:
[all …]

# mypy: ignore-errors
"multi-gpu-1": TestSkip(75, "Need at least 1 CUDA device"),
"multi-gpu-2": TestSkip(77, "Need at least 2 CUDA devices"),
"multi-gpu-3": TestSkip(80, "Need at least 3 CUDA devices"),
"multi-gpu-4": TestSkip(81, "Need at least 4 CUDA devices"),
"multi-gpu-5": TestSkip(82, "Need at least 5 CUDA devices"),
"multi-gpu-6": TestSkip(83, "Need at least 6 CUDA devices"),
"multi-gpu-7": TestSkip(84, "Need at least 7 CUDA devices"),
"multi-gpu-8": TestSkip(85, "Need at least 8 CUDA devices"),
"skipIfRocm": TestSkip(78, "Test skipped for ROCm"),
[all …]

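Each entry maps a skip condition to a distinct process exit code, so a parent process that spawns test workers can tell "skipped" apart from "failed". A hedged sketch of the pattern; the `TestSkip` fields here are reconstructed from the snippet, not the exact definition:

```python
import sys
from dataclasses import dataclass

@dataclass
class TestSkip:
    exit_code: int
    message: str

TEST_SKIPS = {"multi-gpu-2": TestSkip(77, "Need at least 2 CUDA devices")}

def worker_body():
    import torch
    if torch.cuda.device_count() < 2:
        # The parent treats this exit code as a skip rather than a failure.
        sys.exit(TEST_SKIPS["multi-gpu-2"].exit_code)
    # ... run the actual multi-GPU test ...
```
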
58 """Multigpu tests are designed to simulate the multi nodes with multi99 fs.set("-key3", "7")100 self.assertEqual(b"7", fs.get("-key3"))101 fs.delete_key("-key3")129 with self.assertRaisesRegex(RuntimeError, "[t -i]imeout"):173 # property instead of hardcoding in the test since some Store308 ) # type: ignore[call-arg] # noqa: F841311 ) # type: ignore[call-arg] # noqa: F841315 def test_repr(self) -> None:348 # We internally use a multi-tenant TCP store. Both PG and RPC should successfully[all …]
64 """Multigpu tests are designed to simulate the multi nodes with multi95 c2p.append(float(tok - tik))139 # we need to create a separate store just for the store barrier test154 if rank != world_size - 1:188 # we expect the world_size-1 threads to have failed189 self.assertEqual(len(error_list), world_size - 1)200 def __init__(self) -> None:283 …. But I don't want to appeal to the weights' devices directly, because part of this test's purpose293 def __init__(self) -> None:302 def __init__(self) -> None:[all …]
* Alternatively, NV-CONTROL versions 1.27 and greater allow these
* Q: The attribute is a 64-bit integer attribute; use the 64-bit versions
* NV_CTRL_FLATPANEL_SCALING - not supported
* NV_CTRL_DITHERING - the requested dithering configuration;
* NV_CTRL_DIGITAL_VIBRANCE - sets the digital vibrance level for the
* NV_CTRL_BUS_TYPE - returns the bus type through which the specified device
#define NV_CTRL_BUS_TYPE 5 /* R--GI */
* NV_CTRL_VIDEO_RAM - returns the total amount of memory available
#define NV_CTRL_VIDEO_RAM 6 /* R--G */
* NV_CTRL_IRQ - returns the interrupt request line used by the specified
[all …]

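On a system with the NV-CONTROL X extension loaded, such integer attributes are typically also reachable from the `nvidia-settings` command line under string names. A loose sketch; the `VideoRam` query name is an assumption on my part, since the header only defines the C constants:

```python
import subprocess

# Assumed mapping: NV_CTRL_VIDEO_RAM is exposed as the "VideoRam" query name.
result = subprocess.run(
    ["nvidia-settings", "--query", "VideoRam", "--terse"],
    capture_output=True, text=True, check=True,
)
print(result.stdout.strip())  # total video memory, per the header's description
```
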
... xla hash.", 13 "headRefName": "update-xla-commit-hash/5573005593-54-1", ...