
Searched +full:multigpu +full:- +full:test (Results 1 – 12 of 12) sorted by relevance

/external/pytorch/.github/workflows/
_rocm-test.yml
1 # TODO: this looks sort of similar to _linux-test, but there are like a dozen
5 name: test
10 build-environment:
13 description: Top-level label for what's being built/tested.
14 test-matrix:
17 description: JSON description of what test configs to run.
18 docker-image:
22 sync-tag:
28 job with the same `sync-tag` is identical.
29 timeout-minutes:
[all …]
_linux-test.yml
1 name: linux-test
6 build-environment:
9 description: Top-level label for what's being built/tested.
10 test-matrix:
13 description: JSON description of what test configs to run.
14 docker-image:
18 sync-tag:
24 job with the same `sync-tag` is identical.
25 timeout-minutes:
31 use-gha:
[all …]
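The `test-matrix` input described above (and repeated in `_rocm-test.yml` and the `linux-test` action further down) is a JSON string rather than native YAML, so the calling workflow can construct it dynamically. A minimal sketch of what such a matrix might look like and how a script could filter it; the exact schema and the `filter_configs` helper are illustrative assumptions, not PyTorch's real tooling:

    import json

    # Hypothetical test-matrix payload in the shape the description above
    # suggests: a JSON document listing the test configs to run.
    test_matrix = json.loads("""
    {
      "include": [
        {"config": "default",  "shard": 1, "num_shards": 2},
        {"config": "default",  "shard": 2, "num_shards": 2},
        {"config": "multigpu", "shard": 1, "num_shards": 1}
      ]
    }
    """)

    def filter_configs(matrix: dict, config: str) -> dict:
        # Illustrative helper, not part of the actual workflow scripts.
        return {"include": [e for e in matrix["include"] if e["config"] == config]}

    print(filter_configs(test_matrix, "multigpu"))
    # {'include': [{'config': 'multigpu', 'shard': 1, 'num_shards': 1}]}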
periodic.yml
7 - cron: 45 0,8,16 * * 1-5
8 - cron: 45 4 * * 0,6
9 - cron: 45 4,12,20 * * 1-5
10 - cron: 45 12 * * 0,6
11 - cron: 29 8 * * * # about 1:29am PDT, for mem leak check and rerun disabled tests
14 - ciflow/periodic/*
16 - release/*
20-${{ github.event.pull_request.number || github.ref_name }}-${{ github.ref_type == 'branch' && git…
21 cancel-in-progress: true
23 permissions: read-all
[all …]
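The cron lines above drive the periodic schedules: `45 0,8,16 * * 1-5`, for example, fires at minute 45 past hours 0, 8, and 16 UTC on weekdays. One way to sanity-check such an expression, using the third-party `croniter` package (not part of this repository):

    from datetime import datetime, timezone

    from croniter import croniter  # third-party: pip install croniter

    # Next three firings of the weekday schedule from periodic.yml above.
    itr = croniter("45 0,8,16 * * 1-5", datetime(2024, 1, 1, tzinfo=timezone.utc))
    for _ in range(3):
        print(itr.get_next(datetime))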
/external/pytorch/test/
test_numba_integration.py
28 An object t is considered a cuda-tensor if:
31 A cuda-tensor provides a tensor description dict:
34 typestr: (str) A numpy-style typestr.
35 data: (int, boolean) A (data_ptr, read-only) tuple.
39 https://numba.pydata.org/numba-doc/latest/cuda/cuda_array_interface.html
99 # typestr from numpy, cuda-native little-endian
149 # float16 comparisons, which aren't supported cpu-side.
195 # Device-status overrides gradient status.
217 @unittest.skipIf(not TEST_MULTIGPU, "No multigpu")
228 # Tensors on non-default device raise api error if converted
[all …]
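The docstring above summarizes the `__cuda_array_interface__` protocol that lets Numba wrap CUDA tensors without copying (see the linked numba docs). A minimal sketch of inspecting that dict, assuming a CUDA-enabled PyTorch build with at least one GPU:

    import torch

    t = torch.arange(4, dtype=torch.int64, device="cuda")

    # The description dict named above: shape, a numpy-style typestr,
    # and a (data_ptr, read_only) pair.
    iface = t.__cuda_array_interface__
    print(iface["shape"])    # (4,)
    print(iface["typestr"])  # "<i8": little-endian int64
    ptr, read_only = iface["data"]
    assert ptr == t.data_ptr() and not read_only

Numba's `numba.cuda.as_cuda_array` consumes the same dict, which is why the test checks device and gradient status first: a tensor that requires grad refuses to expose the interface.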
test_spectral_ops.py
58 H(x)[i] = conj(x[-i])
61 mid = (x.size(dim) - 1) // 2
68 idx_neg[dim] = slice(-mid, None)
81 # Decompose into Hermitian (FFT of real) and anti-Hermitian (FFT of imaginary)
82 n_fft = x.size(-2)
85 hconj = _hermitian_conj(x, dim=-2)
87 x_antihermitian = (x - hconj) / 2
89 istft_imag = torch.istft(-1j * x_antihermitian[slc], *args, **kwargs, onesided=True)
98 .. math:: X(m, \omega) = \sum_n x[n]w[n - m] e^{-jn\omega}
102 X = torch.empty((n_fft, (x.numel() - n_fft + hop_length) // hop_length),
[all …]
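The helper sketched above builds the Hermitian conjugate H(x)[i] = conj(x[-i]) so a complex spectrum can be split into a Hermitian part (the FFT of the signal's real component) and an anti-Hermitian part. A compact flip-and-roll formulation of the same identity (a sketch, not the test's exact indexing):

    import torch

    def hermitian_conj(x: torch.Tensor, dim: int = -1) -> torch.Tensor:
        # H(x)[i] = x[-i].conj(), indices taken modulo the length:
        # flipping reverses the axis, rolling by one keeps index 0 in place.
        return torch.roll(torch.flip(x, [dim]), 1, dims=dim).conj()

    # Split a complex spectrum as in the decomposition above.
    x = torch.fft.fft(torch.randn(8) + 1j * torch.randn(8))
    h = hermitian_conj(x)
    x_hermitian, x_antihermitian = (x + h) / 2, (x - h) / 2

    # The Hermitian part inverts to a purely real signal.
    assert torch.fft.ifft(x_hermitian).imag.abs().max() < 1e-6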
/external/pytorch/.github/actions/linux-test/
action.yml
1 name: linux-test
4 build-environment:
7 description: Top-level label for what's being built/tested.
8 test-matrix:
11 description: JSON description of what test configs to run.
12 docker-image:
16 sync-tag:
22 job with the same `sync-tag` is identical.
23 use-gha:
28 dashboard-tag:
[all …]
/external/pytorch/torch/testing/_internal/
common_distributed.py
1 # mypy: ignore-errors
67 "multi-gpu-1": TestSkip(75, "Need at least 1 CUDA device"),
68 "multi-gpu-2": TestSkip(77, "Need at least 2 CUDA devices"),
69 "multi-gpu-3": TestSkip(80, "Need at least 3 CUDA devices"),
70 "multi-gpu-4": TestSkip(81, "Need at least 4 CUDA devices"),
71 "multi-gpu-5": TestSkip(82, "Need at least 5 CUDA devices"),
72 "multi-gpu-6": TestSkip(83, "Need at least 6 CUDA devices"),
73 "multi-gpu-7": TestSkip(84, "Need at least 7 CUDA devices"),
74 "multi-gpu-8": TestSkip(85, "Need at least 8 CUDA devices"),
76 "skipIfRocm": TestSkip(78, "Test skipped for ROCm"),
[all …]
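The `multi-gpu-N` entries above reserve distinct process exit codes so a multiprocess test harness can tell "skipped for missing GPUs" apart from real failures. A simplified single-process sketch of the same gate; the real helpers in common_distributed.py exit with these reserved codes rather than calling unittest directly:

    import unittest

    import torch

    def skip_if_lt_x_gpu(x: int):
        # Simplified stand-in for the multi-gpu-N table above.
        has_gpus = torch.cuda.is_available() and torch.cuda.device_count() >= x
        return unittest.skipUnless(has_gpus, f"Need at least {x} CUDA devices")

    class MyDistTest(unittest.TestCase):
        @skip_if_lt_x_gpu(2)
        def test_two_gpu_case(self):
            ...  # body elided; runs only when >= 2 CUDA devices are visible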
/external/pytorch/test/distributed/
test_store.py
58 """Multigpu tests are designed to simulate the multi nodes with multi
99 fs.set("-key3", "7")
100 self.assertEqual(b"7", fs.get("-key3"))
101 fs.delete_key("-key3")
129 with self.assertRaisesRegex(RuntimeError, "[t -i]imeout"):
173 # property instead of hardcoding in the test since some Store
308 ) # type: ignore[call-arg] # noqa: F841
311 ) # type: ignore[call-arg] # noqa: F841
315 def test_repr(self) -> None:
348 # We internally use a multi-tenant TCP store. Both PG and RPC should successfully
[all …]
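The `fs.set` / `fs.get` / `fs.delete_key` calls above exercise a store's basic key-value contract. A self-contained round trip with a single-rank `FileStore`; the file path and world size are arbitrary choices for the sketch:

    import tempfile

    import torch.distributed as dist

    # Single-process FileStore backed by a throwaway file.
    with tempfile.NamedTemporaryFile(delete=False) as f:
        path = f.name
    store = dist.FileStore(path, 1)

    store.set("-key3", "7")
    assert store.get("-key3") == b"7"  # values come back as bytes
    store.delete_key("-key3")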
test_c10d_common.py
64 """Multigpu tests are designed to simulate the multi nodes with multi
95 c2p.append(float(tok - tik))
139 # we need to create a separate store just for the store barrier test
154 if rank != world_size - 1:
188 # we expect the world_size-1 threads to have failed
189 self.assertEqual(len(error_list), world_size - 1)
200 def __init__(self) -> None:
283 …. But I don't want to appeal to the weights' devices directly, because part of this test's purpose
293 def __init__(self) -> None:
302 def __init__(self) -> None:
[all …]
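The store-barrier test above rendezvouses threads through a shared store and expects world_size - 1 of them to fail when one rank never arrives. The underlying pattern is a counter-plus-wait barrier; a naive sketch over any `torch.distributed` store, with no timeout tuning or key cleanup:

    import torch.distributed as dist

    def store_barrier(store: dist.Store, world_size: int, key: str = "barrier") -> None:
        # Every rank atomically increments a counter; the last arrival
        # publishes a done-key and everyone blocks on it.
        if store.add(key, 1) == world_size:  # add() returns the new counter value
            store.set(f"{key}/done", "1")
        store.wait([f"{key}/done"])  # raises after the store timeout if ranks are missing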
/external/angle/src/third_party/libXNVCtrl/
NVCtrl.h
82 * Alternatively, NV-CONTROL versions 1.27 and greater allow these
102 * Q: The attribute is a 64-bit integer attribute; use the 64-bit versions
139 * NV_CTRL_FLATPANEL_SCALING - not supported
160 * NV_CTRL_DITHERING - the requested dithering configuration;
174 * NV_CTRL_DIGITAL_VIBRANCE - sets the digital vibrance level for the
181 * NV_CTRL_BUS_TYPE - returns the bus type through which the specified device
187 #define NV_CTRL_BUS_TYPE 5 /* R--GI */
194 * NV_CTRL_VIDEO_RAM - returns the total amount of memory available
203 #define NV_CTRL_VIDEO_RAM 6 /* R--G */
206 * NV_CTRL_IRQ - returns the interrupt request line used by the specified
[all …]
/external/pytorch/.github/scripts/
rockset_mocks.json.gz
gql_mocks.json.gz ... xla hash.", 13 "headRefName": "update-xla-commit-hash/5573005593-54-1", ...