Searched refs:replicas (Results 1 – 19 of 19) sorted by relevance
/external/tensorflow/tensorflow/compiler/xla/service/
D | service.cc |
    415 TF_ASSIGN_OR_RETURN(auto replicas, Replicas(*backend, device_handles[i])); in ExecuteParallelAndRegisterResult()
    416 CHECK_EQ(replicas.size(), arguments[i].size()); in ExecuteParallelAndRegisterResult()
    417 for (int64 replica = 0; replica < replicas.size(); ++replica) { in ExecuteParallelAndRegisterResult()
    418 device_assignment(replica, i) = replicas[replica]->device_ordinal(); in ExecuteParallelAndRegisterResult()
    424 TF_ASSIGN_OR_RETURN(auto replicas, Replicas(*backend, device_handles[i])); in ExecuteParallelAndRegisterResult()
    425 CHECK_EQ(replicas.size(), arguments[i].size()); in ExecuteParallelAndRegisterResult()
    427 for (int64 replica = 0; replica < replicas.size(); ++replica) { in ExecuteParallelAndRegisterResult()
    429 backend->BorrowStream(replicas[replica])); in ExecuteParallelAndRegisterResult()
    525 TF_ASSIGN_OR_RETURN(auto replicas, Replicas(*backend, device_handle)); in ExecuteAndRegisterResult()
    526 TF_RET_CHECK(!replicas.empty()); in ExecuteAndRegisterResult()
    [all …]
|
D | hlo.proto | 210 // Describes how parameters behave with regards to replicas.
|
/external/tensorflow/tensorflow/core/api_def/base_api/ |
D | api_def_AllToAll.pbtxt |
    49 summary: "An Op to exchange data across TPU replicas."
    52 `split_dimension` and send to the other replicas given group_assignment. After
    53 receiving `split_count` - 1 blocks from other replicas, we concatenate the
    56 For example, suppose there are 2 TPU replicas:
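To make the AllToAll hits above concrete: a minimal NumPy sketch of the split/exchange/concatenate behavior the op doc describes. This illustrates the semantics only; the function name `all_to_all` and the list-of-arrays representation of a replica group are assumptions for the sketch, not the actual TensorFlow op API.

```python
import numpy as np

def all_to_all(per_replica, split_dimension, concat_dimension, split_count):
    """Simulate an all-to-all exchange for one replica group.

    per_replica: list of arrays, one per replica in the group
    (len(per_replica) == split_count).
    """
    # Each replica splits its input into split_count blocks along
    # split_dimension.
    blocks = [np.split(x, split_count, axis=split_dimension)
              for x in per_replica]
    # Replica r receives block r from every replica (including itself) and
    # concatenates the received blocks along concat_dimension.
    return [np.concatenate([blocks[s][r] for s in range(split_count)],
                           axis=concat_dimension)
            for r in range(split_count)]

# Two replicas, echoing the "2 TPU replicas" example in the op doc:
out = all_to_all([np.array([[1, 2]]), np.array([[3, 4]])],
                 split_dimension=1, concat_dimension=0, split_count=2)
# out[0] is [[1], [3]], out[1] is [[2], [4]]
```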
|
D | api_def_TPUReplicate.pbtxt |
    13 additional arguments to broadcast to all replicas. The
    41 the number of replicas of the computation to run.
    84 replicas.
|
D | api_def_TPUReplicateMetadata.pbtxt | 7 Number of replicas of the computation
|
/external/tensorflow/tensorflow/python/distribute/ |
D | input_lib.py |
    135 replicas = []
    148 replicas.append(next_element)
    184 lambda: replicas[i][j],
    189 replicas = results
    195 flattened_replicas = nest.flatten(replicas)
    198 replicas = nest.pack_sequence_as(replicas, flattened_replicas)
    200 return values.regroup(self._input_workers.device_map, replicas)
|
/external/autotest/site_utils/ |
D | check_slave_db_delay.py |
    102 for replica in options.replicas:
    104 if not options.replicas:
|
/external/tensorflow/tensorflow/python/tpu/ |
D | device_assignment.py |
    50 for core, replicas in core_to_replicas.items():
    51 core_to_sorted_replicas[core] = sorted(replicas)
|
D | tpu.py |
    824 replicas = [flat_inputs[replica][i] for replica in xrange(num_replicas)]
    826 tpu_ops.tpu_replicated_input(replicas, name="input{}".format(i)))
|
/external/tensorflow/tensorflow/compiler/xrt/ |
D | xrt.proto |
    20 // As many replicas as there are in the replicated computation.
    30 // The number of replicas the computation will be run on. If this is
|
/external/tensorflow/tensorflow/core/protobuf/tpu/ |
D | optimization_parameters.proto |
    214 // for storing the replicas of hot IDs for the embedding table. In future, the
    215 // number of replicas for a particular hot ID could be adjusted based on its
    216 // frequency. The max_slot_count value captures the total number of replicas
|
/external/autotest/ |
D | README.md | 32 * Infrastructure to set up miniature replicas of a full lab. A full lab does
|
/external/tensorflow/tensorflow/compiler/xla/ |
D | xla_data.proto |
    307 // number of replicas.
    347 // ComputationDevice represents the device ids assinged to the replicas.
    609 // The ids of the replicas that belongs to the same group. The ordering of the
    635 // Describes whether all data-parallelism replicas will receive the same
|
D | xla.proto |
    299 // Number of replicas of the computation to run. If zero, uses the default
    300 // number of replicas for the XLA service.
|
/external/tensorflow/tensorflow/compiler/xla/g3doc/ |
D | operation_semantics.md |
    49 all replicas belong to one group in the order of 0 - (n-1). Alltoall will be
    81 : : : replicas; otherwise, this :
    83 : : : of replicas in each group. :
    458 replicas.
    786 Computes a sum across replicas.
    792 `operand` | `XlaOp` | Array to sum across replicas.
    796 replicas and the operand has the value `(1.0, 2.5)` and `(3.0, 5.25)`
    797 respectively on the two replicas, then the output value from this op will be
    798 `(4.0, 7.75)` on both replicas.
    801 either be empty (all replicas belong to a single group), or contain the same
    [all …]
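The operation_semantics.md hits above include a worked cross-replica sum: `(1.0, 2.5)` and `(3.0, 5.25)` summing to `(4.0, 7.75)` on both replicas. A minimal NumPy sketch of that semantics; the function name `cross_replica_sum` and the list-of-arrays representation are assumptions for illustration, not XLA's API.

```python
import numpy as np

def cross_replica_sum(per_replica):
    """Elementwise sum across replicas; every replica gets the same result."""
    total = np.sum(np.stack(per_replica), axis=0)
    return [total.copy() for _ in per_replica]

# The two-replica example quoted in the hits above:
out = cross_replica_sum([np.array([1.0, 2.5]), np.array([3.0, 5.25])])
# Each replica now holds [4.0, 7.75].
```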
|
/external/python/cpython2/Doc/library/ |
D | turtle.rst | 1409 Both are replicas of the corresponding TurtleScreen methods.
|
/external/jline/src/src/test/resources/jline/example/ |
D | english.gz |
|
/external/cldr/tools/java/org/unicode/cldr/util/data/transforms/ |
D | internal_raw_IPA.txt | 137526 replicas %32411 rˈɛpləkəz
|
D | internal_raw_IPA-old.txt | 164102 replicas %21841 rˈɛpləkəz
|