
Searched full:gpus (Results 1 – 25 of 1176) sorted by relevance


/external/pytorch/benchmarks/distributed/ddp/
README.md:20 If you launch a single task per machine with multiple GPUs, consider
69 …1 GPUs -- no ddp: p50: 0.097s 329/s p75: 0.097s 329/s p90: 0.097s 329/s p95: …
70 …1 GPUs -- 1M/1G: p50: 0.100s 319/s p75: 0.100s 318/s p90: 0.100s 318/s p95: …
71 …2 GPUs -- 1M/2G: p50: 0.103s 310/s p75: 0.103s 310/s p90: 0.103s 310/s p95: …
72 …4 GPUs -- 1M/4G: p50: 0.103s 310/s p75: 0.103s 310/s p90: 0.103s 310/s p95: …
73 …8 GPUs -- 1M/8G: p50: 0.104s 307/s p75: 0.104s 307/s p90: 0.104s 306/s p95: …
74 …16 GPUs -- 2M/8G: p50: 0.104s 306/s p75: 0.104s 306/s p90: 0.104s 306/s p95:…
79 …1 GPUs -- no ddp: p50: 0.162s 197/s p75: 0.162s 197/s p90: 0.162s 197/s p95: …
80 …1 GPUs -- 1M/1G: p50: 0.171s 187/s p75: 0.171s 186/s p90: 0.171s 186/s p95: …
81 …2 GPUs -- 1M/2G: p50: 0.176s 182/s p75: 0.176s 181/s p90: 0.176s 181/s p95: …
[all …]
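The benchmark rows above pair per-iteration latency percentiles (p50/p75/p90/p95) with the implied throughput next to each. A minimal sketch of that summarization, assuming a list of per-iteration latencies in seconds and a hypothetical fixed batch size (`summarize` is our illustration, not the benchmark's actual reporting code, which derives examples/sec from its own batch configuration):

```python
def summarize(latencies, batch_size=32):
    """Report pXX latency and implied throughput (batch_size / latency)
    using nearest-rank percentiles over the observed iterations."""
    if not latencies:
        raise ValueError('need at least one latency sample')
    s = sorted(latencies)
    out = {}
    for p in (50, 75, 90, 95):
        # Nearest-rank index, clamped so p95 of a short run stays in range.
        idx = min(len(s) - 1, int(len(s) * p / 100))
        lat = s[idx]
        out['p%d' % p] = (lat, batch_size / lat)
    return out
```

With ten identical 0.1 s iterations and a batch of 32, every percentile reports 0.1 s and roughly 320 examples/sec, matching the shape of the rows above.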
/external/tensorflow/third_party/
cudnn_frontend_header_fix.patch:32 +#include "third_party/gpus/cudnn/cudnn.h"
46 +#include "third_party/gpus/cudnn/cudnn.h"
47 +#include "third_party/gpus/cudnn/cudnn_backend.h"
61 +#include "third_party/gpus/cudnn/cudnn.h"
62 +#include "third_party/gpus/cudnn/cudnn_backend.h"
76 +#include "third_party/gpus/cudnn/cudnn.h"
77 +#include "third_party/gpus/cudnn/cudnn_backend.h"
90 +#include "third_party/gpus/cudnn/cudnn.h"
104 +#include "third_party/gpus/cudnn/cudnn.h"
105 +#include "third_party/gpus/cudnn/cudnn_backend.h"
[all …]
/external/tensorflow/tensorflow/python/framework/
config_test.py:381 gpus = config.list_physical_devices('GPU')
382 self.assertGreater(len(gpus), 0)
437 gpus = config.list_physical_devices('GPU')
438 self.assertGreater(len(gpus), 0)
461 config.set_visible_devices(gpus)
468 gpus = config.list_physical_devices('GPU')
469 if len(gpus) < 2:
470 self.skipTest('Need at least 2 GPUs')
474 for i in range(0, len(gpus)):
480 with ops.device('/device:GPU:' + str(len(gpus))):
[all …]
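The config_test.py snippets above all follow one guard pattern: list the physical GPUs, then skip the test when fewer than the required number are present. A framework-free sketch of that guard; `gpu_guard` is a hypothetical helper (the real tests call `self.skipTest(...)` on a TensorFlow test case):

```python
def gpu_guard(gpus, required=2):
    """Return a skip reason when fewer than `required` GPUs are available,
    or None when the test can proceed.  `gpus` is any sequence of device
    descriptors, e.g. the result of config.list_physical_devices('GPU')."""
    if len(gpus) < required:
        return 'Need at least %d GPUs' % required
    return None
```

In a real test the caller would do `reason = gpu_guard(gpus); if reason: self.skipTest(reason)` before touching any `/device:GPU:N` placement.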
/external/tensorflow/tensorflow/tools/ci_build/
cuda-clang.patch:1 diff --git a/third_party/gpus/crosstool/cc_toolchain_config.bzl.tpl b/third_party/gpus/crosstool/cc…
3 --- a/third_party/gpus/crosstool/cc_toolchain_config.bzl.tpl
4 +++ b/third_party/gpus/crosstool/cc_toolchain_config.bzl.tpl
14 diff --git a/third_party/gpus/cuda_configure.bzl b/third_party/gpus/cuda_configure.bzl
16 --- a/third_party/gpus/cuda_configure.bzl
17 +++ b/third_party/gpus/cuda_configure.bzl
/external/armnn/
InstallationViaAptRepository.md:153 libarmnn-cpuref-backend23 - Arm NN is an inference engine for CPUs, GPUs and NPUs
154 libarmnn-cpuref-backend24 - Arm NN is an inference engine for CPUs, GPUs and NPUs
155 libarmnn-dev - Arm NN is an inference engine for CPUs, GPUs and NPUs
156 …libarmnntfliteparser-dev - Arm NN is an inference engine for CPUs, GPUs and NPUs # Note: removal o…
157 libarmnn-tfliteparser23 - Arm NN is an inference engine for CPUs, GPUs and NPUs
158 …libarmnntfliteparser24 - Arm NN is an inference engine for CPUs, GPUs and NPUs # Note: removal of …
159 …libarmnntfliteparser24.5 - Arm NN is an inference engine for CPUs, GPUs and NPUs # Note: removal o…
160 libarmnn23 - Arm NN is an inference engine for CPUs, GPUs and NPUs
161 libarmnn24 - Arm NN is an inference engine for CPUs, GPUs and NPUs
162 libarmnn25 - Arm NN is an inference engine for CPUs, GPUs and NPUs
[all …]
/external/tensorflow/tensorflow/
opensource_only.files:220 third_party/gpus/BUILD:
221 third_party/gpus/crosstool/BUILD.rocm.tpl:
222 third_party/gpus/crosstool/BUILD.tpl:
223 third_party/gpus/crosstool/BUILD:
224 third_party/gpus/crosstool/LICENSE:
225 third_party/gpus/crosstool/clang/bin/crosstool_wrapper_driver_is_not_gcc.tpl:
226 third_party/gpus/crosstool/clang/bin/crosstool_wrapper_driver_rocm.tpl:
227 third_party/gpus/crosstool/windows/msvc_wrapper_for_nvcc.py.tpl:
228 third_party/gpus/cuda/BUILD.tpl:
229 third_party/gpus/cuda/BUILD.windows.tpl:
[all …]
/external/tensorflow/tensorflow/python/distribute/cluster_resolver/
README_Slurm.md:5 able to handle homogeneous and heterogeneous tasks as long as the number of GPUs
6 per node and task are the same. This means on nodes with 4 GPUs each it will be
33 The number of GPUs present on each node and number of GPUs for each tasks are
35 first (which is set by Slurm to a list of GPUs for the current node) and has a
36 fallback to using `nvidia-smi`. If this doesn't work or non-NVIDIA GPUs are used
37 those 2 values have to be specified by the user. By default allocated GPUs will
52 and 4 GPUs. `cluster_resolver.cluster_spec()` will return a cluster
59 `3`. Also GPUs will be allocated automatically, so the first job on each node
81 has to be done manually which is useful if e.g. GPUs 0 should go to the first
slurm_cluster_resolver.py:135 """Gets the number of NVIDIA GPUs by using CUDA_VISIBLE_DEVICES and nvidia-smi.
138 Number of GPUs available on the node
147 output = subprocess.check_output(['nvidia-smi', '--list-gpus'],
151 raise RuntimeError('Could not get number of GPUs from nvidia-smi. '
156 """Returns the number of GPUs visible on the current node.
158 Currently only implemented for NVIDIA GPUs.
169 of GPUs on each node and number of GPUs for each task. It retrieves system
189 With the number of GPUs per node and per task it allocates GPUs to tasks by
193 long as the number of GPUs per task stays constant.
205 gpus_per_node: Number of GPUs available on each node. Defaults to the
[all …]
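The Slurm resolver snippets above describe a two-step GPU count: read `CUDA_VISIBLE_DEVICES` first (Slurm sets it to a comma-separated device list on each node), and fall back to `nvidia-smi --list-gpus` when it is unset. A rough sketch of that logic under those assumptions; the function names are ours, not the resolver's actual API:

```python
import os
import subprocess

def num_gpus_from_env(env=None):
    """Count GPUs from CUDA_VISIBLE_DEVICES.  Returns None when the
    variable is unset so the caller can fall back to nvidia-smi; an
    empty value means all GPUs are explicitly hidden (count 0)."""
    env = os.environ if env is None else env
    visible = env.get('CUDA_VISIBLE_DEVICES')
    if visible is None:
        return None  # unset: caller should try nvidia-smi instead
    visible = visible.strip()
    if not visible:
        return 0
    return len(visible.split(','))

def num_gpus_from_smi():
    """Fallback: `nvidia-smi --list-gpus` prints one line per GPU."""
    try:
        out = subprocess.check_output(['nvidia-smi', '--list-gpus'])
    except (OSError, subprocess.CalledProcessError):
        raise RuntimeError('Could not get number of GPUs from nvidia-smi.')
    return len(out.strip().splitlines())
```

As the README notes, neither step works for non-NVIDIA GPUs, which is why the resolver lets the user specify the per-node and per-task counts explicitly.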
/external/angle/src/gpu_info_util/
SystemInfo.cpp:81 for (const GPUDeviceInfo &gpu : gpus) in hasNVIDIAGPU()
93 for (const GPUDeviceInfo &gpu : gpus) in hasIntelGPU()
105 for (const GPUDeviceInfo &gpu : gpus) in hasAMDGPU()
120 for (size_t i = 0; i < gpus.size(); ++i) in getPreferredGPUIndex()
122 std::string vendor = VendorName(gpus[i].vendorId); in getPreferredGPUIndex()
262 ASSERT(!info->gpus.empty()); in GetDualGPUInfo()
272 for (size_t i = 0; i < info->gpus.size(); ++i) in GetDualGPUInfo()
274 if (IsIntel(info->gpus[i].vendorId)) in GetDualGPUInfo()
278 if (IsIntel(info->gpus[active].vendorId)) in GetDualGPUInfo()
286 info->isOptimus = hasIntel && IsNVIDIA(info->gpus[active].vendorId); in GetDualGPUInfo()
[all …]
SystemInfo_macos.mm:54 // Gathers the vendor and device IDs for GPUs listed in the IORegistry.
158 if (info->gpus.size() < 2)
189 for (size_t i = 0; i < info->gpus.size(); ++i)
191 if (info->gpus[i].vendorId == activeVendor && info->gpus[i].deviceId == activeDevice)
326 GetIORegistryDevices(&info->gpus);
327 if (info->gpus.empty())
344 // offline renderers are allowed, and whether these two GPUs are really the
345 // integrated/discrete GPUs in a laptop.
346 if (info->gpus.size() == 2 &&
347 ((IsIntel(info->gpus[0].vendorId) && !IsIntel(info->gpus[1].vendorId)) ||
[all …]
SystemInfo_linux.cpp:74 if (!GetPCIDevicesWithLibPCI(&(info->gpus))) in GetSystemInfo()
84 if (info->gpus.size() == 0) in GetSystemInfo()
91 for (size_t i = 0; i < info->gpus.size(); ++i) in GetSystemInfo()
93 GPUDeviceInfo *gpu = &info->gpus[i]; in GetSystemInfo()
95 // New GPUs might be added inside this loop, don't query for their driver version again in GetSystemInfo()
129 if (IsIntel(gpu->vendorId) && info->gpus.size() == 1) in GetSystemInfo()
140 info->gpus.emplace_back(std::move(nvidiaInfo)); in GetSystemInfo()
/external/tensorflow/tensorflow/python/keras/mixed_precision/
device_compatibility_check.py:90 warning_str += ('Some of your GPUs may run slowly with dtype policy '
92 'capability of at least 7.0. Your GPUs:\n')
98 warning_str += ('Your GPUs may run slowly with dtype policy '
100 'capability of at least 7.0. Your GPUs:\n')
104 warning_str += ('See https://developer.nvidia.com/cuda-gpus for a list of '
105 'GPUs and their compute capabilities.\n')
112 'this machine does not have a GPU. Only Nvidia GPUs with '
124 'Your GPUs will likely run quickly with dtype policy '
145 gpus = config.list_physical_devices('GPU')
146 gpu_details_list = [config.get_device_details(g) for g in gpus]
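The device_compatibility_check.py snippets warn when any GPU falls below compute capability 7.0, the floor for fast `mixed_float16` execution, based on the detail dicts returned per device. A minimal sketch of that partitioning, assuming each dict may carry a `compute_capability` tuple as `tf.config.experimental.get_device_details` returns (`partition_by_capability` is our name, not TensorFlow's):

```python
MIN_CAPABILITY = (7, 0)  # mixed_float16 wants compute capability >= 7.0

def partition_by_capability(gpu_details_list):
    """Split per-GPU detail dicts into (supported, unsupported) for mixed
    precision.  Devices with no reported capability are treated as
    unsupported, mirroring the cautious warning in the check above."""
    supported, unsupported = [], []
    for details in gpu_details_list:
        cc = details.get('compute_capability')
        if cc is not None and tuple(cc) >= MIN_CAPABILITY:
            supported.append(details)
        else:
            unsupported.append(details)
    return supported, unsupported
```

The check then emits the "may run slowly" warning only when `unsupported` is non-empty, and the "will likely run quickly" message otherwise.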
/external/tensorflow/third_party/gpus/cuda/
BUILD.windows.tpl:48 # Provides CUDA headers for '#include "third_party/gpus/cuda/include/cuda.h"'
56 include_prefix = "third_party/gpus",
89 include_prefix = "third_party/gpus/cuda/include",
98 include_prefix = "third_party/gpus/cuda/include",
107 include_prefix = "third_party/gpus/cuda/include",
116 include_prefix = "third_party/gpus/cuda/include",
125 include_prefix = "third_party/gpus/cuda/include",
158 include_prefix = "third_party/gpus/cudnn",
196 include_prefix = "third_party/gpus",
BUILD.tpl:48 # Provides CUDA headers for '#include "third_party/gpus/cuda/include/cuda.h"'
56 include_prefix = "third_party/gpus",
88 include_prefix = "third_party/gpus/cuda/include",
97 include_prefix = "third_party/gpus/cuda/include",
106 include_prefix = "third_party/gpus/cuda/include",
115 include_prefix = "third_party/gpus/cuda/include",
124 include_prefix = "third_party/gpus/cuda/include",
162 include_prefix = "third_party/gpus/cudnn",
202 include_prefix = "third_party/gpus",
/external/angle/src/tests/test_utils/
system_info_util.cpp:24 if (systemInfo.gpus.size() < 2) in findGPU()
28 for (size_t i = 0; i < systemInfo.gpus.size(); ++i) in findGPU()
30 if (lowPower && IsIntel(systemInfo.gpus[i].vendorId)) in findGPU()
35 else if (!lowPower && !IsIntel(systemInfo.gpus[i].vendorId)) in findGPU()
61 for (size_t i = 0; i < systemInfo.gpus.size(); ++i) in FindActiveOpenGLGPU()
64 angle::SplitStringAlongWhitespace(VendorName(systemInfo.gpus[i].vendorId), &vendorTokens); in FindActiveOpenGLGPU()
/external/mesa3d/docs/drivers/
nvk.rst:4 NVK is a Vulkan driver for NVIDIA GPUs.
9 NVK currently supports Turing (RTX 20XX and GTX 16XX) and later GPUs.
11 series) GPUs but anything pre-Turing is currently disabled by default.
22 GTX 16XX) and later GPUs.
65 GPUs Kepler and later, including GPUs for which hardware support is
/external/angle/src/feature_support_util/
feature_support_util_unittest.cpp:26 mSystemInfo.gpus.resize(1); in FeatureSupportUtilTest()
27 mSystemInfo.gpus[0].vendorId = 123; in FeatureSupportUtilTest()
28 mSystemInfo.gpus[0].deviceId = 234; in FeatureSupportUtilTest()
29 mSystemInfo.gpus[0].driverVendor = "GPUVendorA"; in FeatureSupportUtilTest()
30 mSystemInfo.gpus[0].detailedDriverVersion = {1, 2, 3, 4}; in FeatureSupportUtilTest()
130 "GPUs" : [ in TEST_F()
153 systemInfo.gpus[0].detailedDriverVersion = {1, 2, 3, 5}; in TEST_F()
/external/angle/src/tests/
angle_system_info_tests_main.cpp:16 // "gpus": [
104 if (info.gpus.empty()) in main()
130 js::Value gpus; in main() local
131 gpus.SetArray(); in main()
133 for (const angle::GPUDeviceInfo &gpu : info.gpus) in main()
161 gpus.PushBack(obj, allocator); in main()
164 doc.AddMember("gpus", gpus, allocator); in main()
/external/mesa3d/docs/gallium/
distro.rst:24 Driver for the NVIDIA NV30 and NV40 families of GPUs.
29 Driver for the NVIDIA NV50 family of GPUs.
34 Driver for the NVIDIA NVC0 / Fermi family of GPUs.
44 Driver for the ATI/AMD R300, R400, and R500 families of GPUs.
49 Driver for the ATI/AMD R600, R700, Evergreen and Northern Islands families of GPUs.
54 Driver for the AMD Southern Islands family of GPUs.
59 Driver for Qualcomm Adreno 2xx, 3xx, and 4xx series of GPUs.
/external/pytorch/test/distributed/
test_c10d_ops_nccl.py:63 @skip_but_pass_in_sandcastle_if(not TEST_MULTIGPU, "NCCL test requires 2+ GPUs")
99 @skip_but_pass_in_sandcastle_if(not TEST_MULTIGPU, "NCCL test requires 2+ GPUs")
135 @skip_but_pass_in_sandcastle_if(not TEST_MULTIGPU, "NCCL test requires 2+ GPUs")
162 @skip_but_pass_in_sandcastle_if(not TEST_MULTIGPU, "NCCL test requires 2+ GPUs")
247 @skip_but_pass_in_sandcastle_if(not TEST_MULTIGPU, "NCCL test requires 2+ GPUs")
272 @skip_but_pass_in_sandcastle_if(not TEST_MULTIGPU, "NCCL test requires 2+ GPUs")
293 @skip_but_pass_in_sandcastle_if(not TEST_MULTIGPU, "NCCL test requires 2+ GPUs")
318 @skip_but_pass_in_sandcastle_if(not TEST_MULTIGPU, "NCCL test requires 2+ GPUs")
383 @skip_but_pass_in_sandcastle_if(not TEST_MULTIGPU, "NCCL test requires 2+ GPUs")
413 @skip_but_pass_in_sandcastle_if(not TEST_MULTIGPU, "NCCL test requires 2+ GPUs")
[all …]
/external/tensorflow/tensorflow/python/ops/
compiled_collective_ops_gpu_test.py:46 gpus = config.list_physical_devices('GPU')
47 if len(gpus) < num_gpus:
48 self.skipTest('Expected at least {} GPUs but found {} GPUs'.format(
49 num_gpus, len(gpus)))
/external/llvm/lib/Target/AMDGPU/TargetInfo/
AMDGPUTargetInfo.cpp:19 /// \brief The target which suports all AMD GPUs. This will eventually
22 /// \brief The target for GCN GPUs
28 R600(TheAMDGPUTarget, "r600", "AMD GPUs HD2XXX-HD6XXX"); in LLVMInitializeAMDGPUTargetInfo()
29 RegisterTarget<Triple::amdgcn, false> GCN(TheGCNTarget, "amdgcn", "AMD GCN GPUs"); in LLVMInitializeAMDGPUTargetInfo()
/external/tensorflow/tensorflow/python/distribute/
cross_device_ops_test.py:158 gpu_per_process: number of GPUs (0 if no GPUs) used by each process.
333 "count. NCCL requires physical GPUs for every process.")
337 "process and GPU count. NCCL requires physical GPUs for "
380 "count. NCCL requires physical GPUs for every process.")
384 "process and GPU count. NCCL requires physical GPUs for "
465 "count. NCCL requires physical GPUs for every process.")
469 "process and GPU count. NCCL requires physical GPUs for "
513 "count. NCCL requires physical GPUs for every process.")
517 "process and GPU count. NCCL requires physical GPUs for "
642 "count. NCCL requires physical GPUs for every process.")
[all …]
/external/swiftshader/third_party/llvm-16.0/llvm/lib/Target/AMDGPU/TargetInfo/
AMDGPUTargetInfo.cpp:18 /// The target which supports all AMD GPUs. This will eventually
24 /// The target for GCN GPUs
33 "AMD GPUs HD2XXX-HD6XXX", "AMDGPU"); in LLVMInitializeAMDGPUTargetInfo()
35 "AMD GCN GPUs", "AMDGPU"); in LLVMInitializeAMDGPUTargetInfo()
/external/swiftshader/third_party/llvm-10.0/llvm/lib/Target/AMDGPU/TargetInfo/
AMDGPUTargetInfo.cpp:18 /// The target which supports all AMD GPUs. This will eventually
24 /// The target for GCN GPUs
33 "AMD GPUs HD2XXX-HD6XXX", "AMDGPU"); in LLVMInitializeAMDGPUTargetInfo()
35 "AMD GCN GPUs", "AMDGPU"); in LLVMInitializeAMDGPUTargetInfo()
