
Searched +full:- +full:- +full:list +full:- +full:gpus (Results 1 – 25 of 583) sorted by relevance


/external/tensorflow/tensorflow/python/keras/mixed_precision/
device_compatibility_check.py
7 # http://www.apache.org/licenses/LICENSE-2.0
28 'running a multi-worker model, you can ignore this warning. This message '
41 device_strs: A list of strings, each representing a device.
49 num = len(list(vals))
64 gpu_details_list: A list of dicts, one dict per GPU. Each dict
90 warning_str += ('Some of your GPUs may run slowly with dtype policy '
92 'capability of at least 7.0. Your GPUs:\n')
98 warning_str += ('Your GPUs may run slowly with dtype policy '
100 'capability of at least 7.0. Your GPUs:\n')
104 warning_str += ('See https://developer.nvidia.com/cuda-gpus for a list of '
[all …]
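
A minimal sketch of the check this module performs, written against the public tf.config API only: list each GPU and compare its compute capability against the 7.0 threshold the warning strings above refer to.

```python
import tensorflow as tf

# Warn for GPUs whose compute capability is below 7.0, the minimum for
# fast mixed_float16 kernels (per the warning text above).
for gpu in tf.config.list_physical_devices("GPU"):
    details = tf.config.experimental.get_device_details(gpu)
    major, minor = details.get("compute_capability", (0, 0))
    if (major, minor) < (7, 0):
        name = details.get("device_name", gpu.name)
        print(f"{name}: compute capability {major}.{minor} < 7.0; "
              "mixed_float16 may run slowly")
```
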
/external/mesa3d/docs/drivers/
nvk.rst
4 NVK is a Vulkan driver for NVIDIA GPUs.
7 ----------------
9 NVK currently supports Turing (RTX 20XX and GTX 16XX) and later GPUs.
11 series) GPUs but anything pre-Turing is currently disabled by default.
14 -------------------
19 -------------------
22 GTX 16XX) and later GPUs.
25 ---------
32 a comma-separated list of named flags affecting the NVK back-end shader
48 a comma-separated list of named flags, which do various things:
[all …]
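
NVK_DEBUG is documented above as a comma-separated list of named flags. A hedged sketch of setting it for a child Vulkan process; the flag name and the vkcube client are placeholders, and the real flag names are listed in nvk.rst.

```python
import os
import subprocess

# "push_dump" is a placeholder flag name; consult nvk.rst for the real ones.
env = dict(os.environ, NVK_DEBUG="push_dump")
subprocess.run(["vkcube"], env=env, check=False)  # any Vulkan client works
```
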
radv.rst
4 RADV is a Vulkan driver for AMD GCN/RDNA GPUs.
7 ------------
9 RADV is a userspace driver that implements the Vulkan API on most modern AMD GPUs.
24 All GCN and RDNA GPUs that are supported by the Linux kernel (and capable of graphics)
26 We are always working on supporting the very latest GPUs too.
30 * GFX6-7 (GCN 1-2): Vulkan 1.3
31 * GFX8 and newer (GCN 3-5 and RDNA): Vulkan 1.4
33 `The exact list of Vulkan conformant products can be seen here. <https://www.khronos.org/conformanc…
44 `see the src/amd/common/amd_family.h file <https://gitlab.freedesktop.org/mesa/mesa/-/blob/main/src…
46 Note that for GFX6-7 (GCN 1-2) GPUs, the ``amdgpu`` kernel driver is currently not the default in L…
[all …]
/external/tensorflow/tensorflow/python/distribute/cluster_resolver/
slurm_cluster_resolver.py
1 # Copyright 2018-2020 The TensorFlow Authors. All Rights Reserved.
7 # http://www.apache.org/licenses/LICENSE-2.0
28 """Create a list of hosts out of a SLURM hostlist.
31 Input: 'n[1-2],m5,o[3-4,6,7-9]')
36 """Split hostlist at commas outside of range expressions ('[3-5]')."""
56 """Expand a range expression like '3-5' to values 3,4,5."""
58 sub_range = part.split('-')
101 raise ValueError('Invalid tasks-per-node list format "%s": %s' %
135 """Gets the number of NVIDIA GPUs by using CUDA_VISIBLE_DEVICES and nvidia-smi.
138 Number of GPUs available on the node
[all …]
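
The docstring above gives the hostlist grammar ('n[1-2],m5,o[3-4,6,7-9]'). A self-contained sketch of that expansion, mirroring the idea rather than TensorFlow's exact implementation:

```python
import re

def expand_hostlist(hostlist):
    """Expand 'n[1-2],m5' -> ['n1', 'n2', 'm5'] (sketch, no zero-padding)."""
    hosts = []
    # Split at commas that are outside range expressions like '[3-5]'.
    for part in re.split(r",(?![^\[]*\])", hostlist):
        m = re.match(r"^(.*)\[([\d,\-]+)\]$", part)
        if not m:
            hosts.append(part)
            continue
        prefix, ranges = m.groups()
        for rng in ranges.split(","):
            lo, _, hi = rng.partition("-")  # '3-5' -> 3..5, '6' -> 6
            for i in range(int(lo), int(hi or lo) + 1):
                hosts.append(f"{prefix}{i}")
    return hosts

print(expand_hostlist("n[1-2],m5,o[3-4,6,7-9]"))
# ['n1', 'n2', 'm5', 'o3', 'o4', 'o6', 'o7', 'o8', 'o9']
```
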
README_Slurm.md
5 able to handle homogeneous and heterogeneous tasks as long as the number of GPUs
6 per node and task is the same. This means on nodes with 4 GPUs each it will be
33 The number of GPUs present on each node and number of GPUs for each tasks are
35 first (which is set by Slurm to a list of GPUs for the current node) and has a
36 fallback to using `nvidia-smi`. If this doesn't work or non-NVIDIA GPUs are used
37 those 2 values have to be specified by the user. By default allocated GPUs will
43 - Slurm allocation in shell `salloc --nodes=2 -t 01:30:00 --ntasks-per-node=2
44 --gres=gpu:k80:4 --exclusive`
45 - Run the example `srun python tf_example.py`
46 - Creating cluster in Python `import tensorflow as tf cluster_resolver =
[all …]
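
The truncated "Creating cluster in Python" item presumably continues with the standard resolver usage; a minimal sketch, assuming the salloc allocation above (2 nodes, 4 GPUs each):

```python
import tensorflow as tf

# Reads the Slurm environment (and CUDA_VISIBLE_DEVICES / nvidia-smi,
# as described above) to build the cluster spec.
cluster_resolver = tf.distribute.cluster_resolver.SlurmClusterResolver()
strategy = tf.distribute.MultiWorkerMirroredStrategy(
    cluster_resolver=cluster_resolver)
```
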
/external/tensorflow/tensorflow/tools/docs/
tf_doctest.py
7 # http://www.apache.org/licenses/LICENSE-2.0
29 from tensorflow.python.distribute import distribution_strategy_context # pylint: disable=unused-im…
36 import doctest # pylint: disable=g-bad-import-order
45 flags.DEFINE_list('module', [], 'A list of specific modules to run doctest on.')
47 'A list of modules to ignore when resolving modules.')
48 flags.DEFINE_boolean('list', None,
49 'List all the modules in the core package imported.')
51 'The number of GPUs required for the tests.')
53 # Both --module and --module_prefix_skip are relative to PACKAGE.
61 """Recursively imports all the sub-modules under a root package.
[all …]
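
A runnable sketch of the absl flag definitions the excerpt shows; the flag name required_gpus is inferred from the help string at line 51 and may differ in the real script.

```python
from absl import app, flags

flags.DEFINE_list("module", [], "A list of specific modules to run doctest on.")
flags.DEFINE_boolean("list", None,
                     "List all the modules in the core package imported.")
flags.DEFINE_integer("required_gpus", 0,  # name inferred from the help text
                     "The number of GPUs required for the tests.")

def main(_):
    print(flags.FLAGS.module, flags.FLAGS.required_gpus)

if __name__ == "__main__":
    app.run(main)
```
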
/external/tensorflow/tensorflow/python/framework/
config_test.py
7 # http://www.apache.org/licenses/LICENSE-2.0
64 # well as any init-time configs.
115 # well as any init-time configs.
344 config.set_optimizer_experimental_options({'min_graph_nodes': -1})
345 options['min_graph_nodes'] = -1
381 gpus = config.list_physical_devices('GPU')
382 self.assertGreater(len(gpus), 0)
437 gpus = config.list_physical_devices('GPU')
438 self.assertGreater(len(gpus), 0)
461 config.set_visible_devices(gpus)
[all …]
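
What these test fragments exercise, as a standalone sketch: restrict TensorFlow to the first physical GPU before the runtime is initialized.

```python
import tensorflow as tf

gpus = tf.config.list_physical_devices("GPU")
if gpus:
    # Must run before any tensors or contexts touch the GPUs.
    tf.config.set_visible_devices(gpus[:1], "GPU")
    print(tf.config.get_visible_devices("GPU"))  # only the first GPU remains
```
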
config.py
7 # http://www.apache.org/licenses/LICENSE-2.0
29 """Returns whether TensorFloat-32 is enabled.
31 By default, TensorFloat-32 is enabled, but this can be changed with
35 True if TensorFloat-32 is enabled (the default) and False otherwise
42 """Enable or disable the use of TensorFloat-32 on supported hardware.
44 [TensorFloat-32](https://blogs.nvidia.com/blog/2020/05/14/tensorfloat-32-precision-format),
45 or TF32 for short, is a math mode for NVIDIA Ampere GPUs. TensorFloat-32
47 convolutions, to run much faster on Ampere GPUs but with reduced precision.
51 TensorFloat-32 is enabled by default. TensorFloat-32 is only supported on
52 Ampere GPUs, so all other hardware will use the full float32 precision
[all …]
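
The API this docstring documents, in one sketch: TensorFloat-32 trades matmul and convolution precision for speed on Ampere and newer GPUs, and can be toggled globally.

```python
import tensorflow as tf

# True by default; other hardware silently keeps full float32 precision.
print(tf.config.experimental.tensor_float_32_execution_enabled())
tf.config.experimental.enable_tensor_float_32_execution(False)  # force float32
```
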
/external/angle/src/feature_support_util/
feature_support_util.cpp
3 // Use of this source code is governed by a BSD-style license that can be
21 #include <list>
55 // - Objects, which are a set of comma-separated string:value pairs (note the recursive nature)
56 // - Arrays, which are a set of comma-separated values.
60 // The JSON identifier for the top-level set of rules. This is an object, the value of which is an
63 // most-recent answer is the final answer.
66 identifier (i.e. "Rule") as the string and the value is a user-friendly description of the rule:
68 // Within a rule, the JSON identifier for the answer--whether or not to use ANGLE. The value is a
84 // manufacturer of the device. The value is a string. If any other non-GPU attributes will be
92 // one or more GPUs/drivers used in the device. The value is an
[all …]
/external/angle/src/gpu_info_util/
SystemInfo_linux.cpp
3 // Use of this source code is governed by a BSD-style license that can be
7 // SystemInfo_linux.cpp: implementation of the Linux-specific parts of SystemInfo.h
74 if (!GetPCIDevicesWithLibPCI(&(info->gpus))) in GetSystemInfo()
84 if (info->gpus.size() == 0) in GetSystemInfo()
91 for (size_t i = 0; i < info->gpus.size(); ++i) in GetSystemInfo()
93 GPUDeviceInfo *gpu = &info->gpus[i]; in GetSystemInfo()
95 // New GPUs might be added inside this loop, don't query for their driver version again in GetSystemInfo()
96 if (!gpu->driverVendor.empty()) in GetSystemInfo()
101 if (IsAMD(gpu->vendorId)) in GetSystemInfo()
106 gpu->driverVendor = "AMD (Brahma)"; in GetSystemInfo()
[all …]
/external/tensorflow/tensorflow/python/distribute/
cross_device_ops_test.py
7 # http://www.apache.org/licenses/LICENSE-2.0
62 value: a tensor-convertible value or a `IndexedSlicesValue`, or a callable
65 devices: a list of device strings to create `PerReplica` values on.
74 elif isinstance(value, list):
158 gpu_per_process: number of GPUs (0 if no GPUs) used by each process.
162 of `CollectiveAllReduce`, devices are a list of local devices (str)
184 """An utility to convert a `Mirrored`, `Tensor` or `IndexedSlices` to a list.
189 makes collective ops of non-primary device being pruned, and will eventually
197 A list of `Tensor` or `IndexedSlices`.
208 RunOptions = collections.namedtuple( # pylint: disable=invalid-name
[all …]
collective_all_reduce_strategy_test.py
7 # http://www.apache.org/licenses/LICENSE-2.0
70 # different number of GPUs than the number of physical devices.
153 gen_math_ops.mat_mul(x, kernel), []) - constant_op.constant(1.)
164 ret = list(zip(grads, var_list))
174 # Run forward & backward to get gradients, variables list.
183 # TODO(yuefengz): support non-Mirrored variable as destinations.
206 error_before = abs(before - 1)
207 error_after = abs(after - 1)
234 np.allclose(x_value, reduced_x_value, atol=1e-5),
261 self.assertCountEqual(list(expected_value), list(computed_value))
[all …]
mirrored_strategy.py
7 # http://www.apache.org/licenses/LICENSE-2.0
55 """Checks whether the devices list is for single or multi-worker.
58 devices: a list of device strings or tf.config.LogicalDevice objects, for
81 "Devices cannot have mixed list of device strings "
95 """Returns a device list given a cluster spec."""
111 """Groups the devices list by task_type and task_id.
114 devices: a list of device strings for remote devices.
117 a dict of list of device strings mapping from task_type to a list of devices
130 # Fill the device list for task_type until it covers the task_id.
144 """Infers the number of GPUs on each worker.
[all …]
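
A sketch of the single-worker case these helpers feed into: an explicit local device list yields one replica per GPU (guarded here because it assumes two GPUs are present).

```python
import tensorflow as tf

if len(tf.config.list_physical_devices("GPU")) >= 2:
    strategy = tf.distribute.MirroredStrategy(devices=["/gpu:0", "/gpu:1"])
    print(strategy.num_replicas_in_sync)  # 2
```
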
/external/google-cloud-java/java-compute/proto-google-cloud-compute-v1/src/main/java/com/google/cloud/compute/v1/
AcceleratorConfigOrBuilder.java
8 * https://www.apache.org/licenses/LICENSE-2.0
55 …-project/zones/us-central1-c/acceleratorTypes/nvidia-tesla-p100 If you are creating an instance te…
67 …-project/zones/us-central1-c/acceleratorTypes/nvidia-tesla-p100 If you are creating an instance te…
79 …-project/zones/us-central1-c/acceleratorTypes/nvidia-tesla-p100 If you are creating an instance te…
/external/pytorch/torch/xpu/
random.py
1 # mypy: allow-untyped-defs
2 from typing import Iterable, List, Union
10 def get_rng_state(device: Union[int, str, torch.device] = "xpu") -> Tensor:
32 def get_rng_state_all() -> List[Tensor]:
33 r"""Return a list of ByteTensor representing the random number states of all devices."""
42 ) -> None:
67 def set_rng_state_all(new_states: Iterable[Tensor]) -> None:
77 def manual_seed(seed: int) -> None:
86 If you are working with a multi-GPU model, this function is insufficient
87 to get determinism. To seed all GPUs, use :func:`manual_seed_all`.
[all …]
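
Per the docstring above, manual_seed() only seeds the current device; a minimal sketch of the multi-GPU-safe variant it points to:

```python
import torch

if torch.xpu.is_available():
    torch.xpu.manual_seed_all(42)  # seeds every XPU device, not just the current one
```
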
/external/pytorch/torch/cuda/
random.py
1 # mypy: allow-untyped-defs
2 from typing import Iterable, List, Union
23 def get_rng_state(device: Union[int, str, torch.device] = "cuda") -> Tensor:
45 def get_rng_state_all() -> List[Tensor]:
46 r"""Return a list of ByteTensor representing the random number states of all devices."""
55 ) -> None:
80 def set_rng_state_all(new_states: Iterable[Tensor]) -> None:
90 def manual_seed(seed: int) -> None:
100 If you are working with a multi-GPU model, this function is insufficient
101 to get determinism. To seed all GPUs, use :func:`manual_seed_all`.
[all …]
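
A sketch of the companion state APIs from the same module: checkpoint and restore the generator state of every CUDA device, e.g. around a run that must be replayable.

```python
import torch

if torch.cuda.is_available():
    states = torch.cuda.get_rng_state_all()  # List[Tensor], one per device
    _ = torch.rand(4, device="cuda")         # perturbs the generator
    torch.cuda.set_rng_state_all(states)     # restores all devices
```
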
/external/armnn/
InstallationViaAptRepository.md
4 * [Add the Ubuntu Launchpad PPA to your system](#add-the-ubuntu-launchpad-ppa-to-your-system)
5 * [Outline of available packages](#outline-of-available-packages)
9 * [Install desired combination of packages](#install-desired-combination-of-packages)
10 * [Installation of specific ABI versioned packages](#installation-of-specific-abi-versioned-package…
11 * [Uninstall packages](#uninstall-packages)
22 * Add the PPA to your sources using a command contained in software-properties-common package:
24 sudo apt install software-properties-common
25 sudo add-apt-repository ppa:armnn/ppa
43 libarmnn-cpuref-backend{ARMNN_MAJOR_VERSION}_{ARMNN_RELEASE_VERSION}-{PACKAGE_VERSION}_amd64.deb
44 libarmnntfliteparser{ARMNN_MAJOR_VERSION}_{ARMNN_RELEASE_VERSION}-{PACKAGE_VERSION}_amd64.deb
[all …]
/external/pytorch/test/scripts/
run_cuda_memcheck.py
3 """This script runs cuda-memcheck on the specified unit test. Each test case
12 Note that running cuda-memcheck could be very slow.
29 GPUS = torch.cuda.device_count() variable
32 parser = argparse.ArgumentParser(description="Run isolated cuda-memcheck on unit tests")
42 "--strict",
45 "cublas/cudnn does not run error-free under cuda-memcheck, and ignoring these errors",
48 "--nproc",
54 "--gpus",
56 …help='GPU assignments for each process, it could be "all", or : separated list like "1,2:3,4:5,6"',
59 "--ci",
[all …]
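
The --gpus help text describes assignments like "1,2:3,4:5,6": colon-separated per-process groups, each a comma-separated GPU list. A hedged parsing sketch (not the script's own code):

```python
def parse_gpu_assignments(spec: str, nproc: int):
    """Parse '1,2:3,4:5,6' into per-process GPU lists; 'all' means no pinning."""
    if spec == "all":
        return [None] * nproc
    return [[int(g) for g in group.split(",")] for group in spec.split(":")]

print(parse_gpu_assignments("1,2:3,4:5,6", 3))  # [[1, 2], [3, 4], [5, 6]]
```
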
/external/pytorch/.github/actions/setup-rocm/
action.yml
8 - name: Set DOCKER_HOST
10 run: echo "DOCKER_HOST=unix:///run/user/$(id -u)/docker.sock" >> "${GITHUB_ENV}"
12 - name: Remove leftover Docker config file
14 continue-on-error: true
16 set -ex
19 …/stackoverflow.com/questions/64455468/error-when-logging-into-ecr-with-docker-login-error-saving-c…
20 rm -f ~/.docker/config.json
22 - name: Stop all running docker containers
26 # ignore expansion of "docker ps -q" since it could be empty
28 docker stop $(docker ps -q) || true
[all …]
/external/pytorch/torch/futures/
__init__.py
1 # mypy: allow-untyped-defs
4 from typing import cast, Callable, Generic, List, Optional, Type, TypeVar, Union
14 class _PyFutureMeta(type(torch._C.Future), type(Generic)): # type: ignore[misc, no-redef]
27 def __init__(self, *, devices: Optional[List[Union[int, str, torch.device]]] = None):
37 devices(``List[Union[int, str, torch.device]]``, optional): the set
45 def done(self) -> bool:
50 If the value contains tensors that reside on GPUs, ``Future.done()``
58 def wait(self) -> T:
62 If the value contains tensors that reside on GPUs, then an additional
65 non-blocking, which means that ``wait()`` will insert the necessary
[all …]
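
A minimal sketch of the Future API these docstrings describe; when the value holds GPU tensors, wait() also performs the extra stream synchronization mentioned above.

```python
import torch

fut = torch.futures.Future()
fut.set_result(torch.ones(2))
print(fut.done())                               # True
print(fut.then(lambda f: f.wait() * 2).wait())  # tensor([2., 2.])
```
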
/external/pytorch/torch/nn/parallel/
comm.py
1 # mypy: allow-untyped-defs
3 from typing import List
24 out (Sequence[Tensor], optional, keyword-only): the GPU tensors to
31 - If :attr:`devices` is specified,
34 - If :attr:`out` is specified,
51 """Broadcast a sequence of tensors to the specified GPUs.
71 """Sum tensors from multiple GPUs.
89 assert inp.device.type != "cpu", "reduce_add expects all inputs to be on GPUs"
122 """Sum tensors from multiple GPUs.
140 dense_tensors: List[List] = [[] for _ in inputs] # shape (num_gpus, num_tensors)
[all …]
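
A guarded sketch of the two primitives excerpted above: broadcast copies one tensor to each listed GPU, and reduce_add sums GPU-resident tensors onto a destination device.

```python
import torch
from torch.nn.parallel import comm

if torch.cuda.device_count() >= 2:
    copies = comm.broadcast(torch.ones(3), devices=[0, 1])  # one copy per GPU
    scaled = [c * (i + 1) for i, c in enumerate(copies)]
    print(comm.reduce_add(scaled, destination=0))  # tensor([3., 3., 3.]) on cuda:0
```
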
data_parallel.py
1 # mypy: allow-untyped-defs
5 from typing import Any, Dict, Generic, List, Optional, Sequence, Tuple, TypeVar, Union
23 def _check_balance(device_ids: Sequence[Union[int, torch.device]]) -> None:
25 There is an imbalance between your GPUs. You may want to exclude GPU {} which
62 The batch size should be larger than the number of GPUs used.
66 instead of this class, to do multi-GPU training, even if there is only a single
67 node. See: :ref:`cuda-nn-ddp-instead` and :ref:`ddp`.
71 **scattered** on dim specified (default 0). tuple, list and dict types will
87 the base parallelized :attr:`module`. So **in-place** updates to the
104 When :attr:`module` returns a scalar (i.e., 0-dimensional tensor) in
[all …]
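
The usage this docstring describes, as a guarded sketch; note the docs above recommend DistributedDataParallel instead, even on a single node.

```python
import torch
import torch.nn as nn

if torch.cuda.device_count() >= 2:
    dp = nn.DataParallel(nn.Linear(10, 5).cuda(), device_ids=[0, 1])
    out = dp(torch.randn(8, 10).cuda())  # batch (8) > number of GPUs (2)
```
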
/external/libopus/dnn/training_tf2/
test_plc.py
2 '''Copyright (c) 2021-2022 Amazon
3 Copyright (c) 2018-2019 Mozilla
9 - Redistributions of source code must retain the above copyright
10 notice, this list of conditions and the following disclaimer.
12 - Redistributions in binary form must reproduce the above copyright
13 notice, this list of conditions and the following disclaimer in the
39 parser.add_argument('--model', metavar='<model>', default='lpcnet_plc', help='PLC model python defi…
42 parser.add_argument('--gru-size', metavar='<units>', default=256, type=int, help='number of units i…
43 parser.add_argument('--cond-size', metavar='<units>', default=128, type=int, help='number of units …
59 #gpus = tf.config.experimental.list_physical_devices('GPU')
[all …]
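
The commented-out list_physical_devices line above is usually the first half of the standard TF2 memory-growth idiom; a sketch of the full pattern, which this script may or may not use:

```python
import tensorflow as tf

gpus = tf.config.experimental.list_physical_devices("GPU")
for gpu in gpus:
    tf.config.experimental.set_memory_growth(gpu, True)  # don't reserve all VRAM
```
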
/external/pytorch/test/distributed/
test_c10d_common.py
65 GPUs on each node. Nccl backend requires equal #GPUs in each process.
66 On a single node, all visible GPUs are evenly
69 visible_devices = list(range(torch.cuda.device_count()))
95 c2p.append(float(tok - tik))
154 if rank != world_size - 1:
188 # we expect the world_size-1 threads to have failed
189 self.assertEqual(len(error_list), world_size - 1)
200 def __init__(self) -> None:
215 def __init__(self, gpus): argument
217 self.fc1 = nn.Linear(2, 10, bias=False).to(gpus[0])
[all …]
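
One common scheme matching the comment above (equal GPU counts per process, visible devices spread evenly): give rank r the visible GPU r modulo the device count. A sketch, not the test's exact arithmetic:

```python
import torch

def gpu_for_rank(rank: int) -> int:
    visible_devices = list(range(torch.cuda.device_count()))
    return visible_devices[rank % len(visible_devices)]
```
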
/external/python/google-api-python-client/docs/dyn/
ml_v1.projects.jobs.html
8 font-weight: inherit;
9 font-style: inherit;
10 font-size: 100%;
11 font-family: inherit;
12 vertical-align: baseline;
16 font-size: 13px;
21 font-size: 26px;
22 margin-bottom: 1em;
26 font-size: 24px;
27 margin-bottom: 1em;
[all …]
