/external/tensorflow/tensorflow/core/api_def/base_api/ |
D | api_def_TPUReplicatedOutput.pbtxt |
    4    summary: "Connects N outputs from an N-way replicated TPU computation."
    6    This operation holds a replicated output from a `tpu.replicate()` computation subgraph.
    7    Each replicated output has the same shape and type alongside the input.
    14   The above computation has a replicated output of two replicas.
|
D | api_def_TPUReplicatedInput.pbtxt |
    4    summary: "Connects N inputs to an N-way replicated TPU computation."
    6    This operation holds a replicated input to a `tpu.replicate()` computation subgraph.
    7    Each replicated input has the same shape and type alongside the output.
    16   The above computation has a replicated input of two replicas.
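
These two ops are normally inserted for you by `tpu.replicate()` or the TPU bridge rather than written by hand, but the fan-in/fan-out they describe can be sketched directly with `tf.raw_ops`. A minimal, trace-only sketch (assumes TensorFlow 2.x; the graph is only built, never run, so no TPU is needed):

    import tensorflow as tf

    @tf.function
    def two_replica_body(a, b):
        # Fan two per-replica values into one logical input for the replicated body.
        x = tf.raw_ops.TPUReplicatedInput(inputs=[a, b])
        y = x * 2.0
        # Fan the single logical result back out into one output per replica.
        return tf.raw_ops.TPUReplicatedOutput(input=y, num_replicas=2)

    # Tracing only: the ops are added to a graph without being executed.
    cf = two_replica_body.get_concrete_function(
        tf.TensorSpec([2], tf.float32), tf.TensorSpec([2], tf.float32))
    print([op.type for op in cf.graph.get_operations()
           if "TPUReplicated" in op.type])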
|
D | api_def_TPUPartitionedInput.pbtxt | 19 those inputs are replicated.
|
D | api_def_CollectivePermute.pbtxt | 29 summary: "An Op to permute tensors across replicated TPU instances."
|
D | api_def_TPUReplicateMetadata.pbtxt | 46 summary: "Metadata indicating how the TPU computation should be replicated."
|
D | api_def_CrossReplicaSum.pbtxt | 30 summary: "An Op to sum inputs across replicated TPU instances."
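
A rough way to read these two summaries (CollectivePermute above, CrossReplicaSum here): CrossReplicaSum adds up the per-replica values within each group of `group_assignment`, and CollectivePermute forwards each replica's value along `source_target_pairs`. A plain-NumPy model of those semantics, not the TPU implementation, with hypothetical helper names:

    import numpy as np

    def cross_replica_sum(per_replica, group_assignment):
        # Every replica in a group receives the sum over that group.
        out = [None] * len(per_replica)
        for group in group_assignment:
            total = sum(per_replica[r] for r in group)
            for r in group:
                out[r] = total
        return out

    def collective_permute(per_replica, source_target_pairs):
        # Replica `target` receives replica `source`'s value; unpaired targets get zeros.
        out = [np.zeros_like(v) for v in per_replica]
        for source, target in source_target_pairs:
            out[target] = per_replica[source]
        return out

    vals = [np.array([1.0]), np.array([2.0]), np.array([3.0]), np.array([4.0])]
    print(cross_replica_sum(vals, group_assignment=[[0, 1], [2, 3]]))   # groups sum to 3 and 7
    print(collective_permute(vals, source_target_pairs=[(0, 1), (1, 2), (2, 3), (3, 0)]))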
|
D | api_def_Tile.pbtxt | 19 and the values of `input` are replicated `multiples[i]` times along the 'i'th
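
The Tile behavior that line describes is easy to check from Python: dimension i of the result repeats the input `multiples[i]` times.

    import tensorflow as tf

    x = tf.constant([[1, 2],
                     [3, 4]])
    # Dimension 0 is replicated 2 times and dimension 1 is replicated 3 times,
    # so a [2, 2] input becomes a [4, 6] result.
    print(tf.tile(x, multiples=[2, 3]))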
|
/external/tensorflow/tensorflow/compiler/mlir/tensorflow/tests/ |
D | tf_device_ops_invalid.mlir |
    13   // Check that a replicate replicated inputs where operand sizes do not match
    17   // expected-error@-1 {{'tf_device.replicate' expects number of operands for replicated input 0 to b…
    129  // Check number of replicated inputs is evenly divisible by 'n'.
    132  // expected-error@-1 {{'tf_device.replicate' op expects number of replicated inputs (4) to be evenl…
    140  // Check number of replicated inputs / 'n' + number of packed inputs matches the
    144  …ate' op expects number of block arguments (2) to be equal to number of replicated inputs (3) / 'n'…
    152  // Check that a replicate with incompatible replicated operand and block
|
D | tpu-dynamic-layout-pass.mlir |
    3    // Tests that the pass can transform non-replicated execution.
    235  // Tests that the pass can transform replicated execution.
    237  // CHECK: func @replicated(%[[ARG0:.*]]: tensor<*x!tf.resource> {tf.device = "/device:CPU:0"}) -> t…
    238  func @replicated(%arg0: tensor<*x!tf.resource> {tf.device = "/device:CPU:0"}) -> tensor<i32> {
    280  // Tests that the pass can transform replicated execution with packed inputs.
    317  // Tests that the pass can transform replicated execution with both replicated
    320  // CHECK: func @replicated(%[[ARG0:.*]]: tensor<*x!tf.resource> {tf.device = "/device:CPU:0"}) -> t…
    321  func @replicated(%arg0: tensor<*x!tf.resource> {tf.device = "/device:CPU:0"}, %arg1: tensor<*x!tf.r…
|
D | annotate-parameter-replication.mlir |
    3    // Tests that an operand from outside the replicated region is annotated.
    68   // Tests that a non-replicated ClusterFuncOp is not annotated.
|
/external/tensorflow/tensorflow/compiler/mlir/tensorflow/tests/compile_mlir_util/ |
D | argument-sharding.mlir |
    37   // CHECK-SAME: {replicated}
    44   // CHECK-SAME: sharding={replicated}
|
D | result-sharding.mlir | 38 // CHECK-SAME: {replicated}
|
/external/mesa3d/src/panfrost/util/ |
D | pan_lower_framebuffer.c |
    160  nir_ssa_def *replicated[4]; in pan_pack_pure_32() local
    163  replicated[i] = nir_channel(b, v, i % v->num_components); in pan_pack_pure_32()
    165  return nir_vec(b, replicated, 4); in pan_pack_pure_32()
    180  nir_ssa_def *replicated[4]; in pan_pack_pure_16() local
    190  replicated[i] = nir_pack_32_2x16(b, nir_vec(b, parts, 2)); in pan_pack_pure_16()
    193  return nir_vec(b, replicated, 4); in pan_pack_pure_16()
    252  nir_ssa_def *replicated[4] = { v, v, v, v }; in pan_replicate_4() local
    253  return nir_vec(b, replicated, 4); in pan_replicate_4()
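
The pattern in pan_pack_pure_32() and pan_replicate_4() is simply "fill a vec4 by cycling the source channels". A hypothetical NumPy rendering of that indexing (not the NIR builder API):

    import numpy as np

    def replicate_to_vec4(v):
        # Lane i takes channel i % num_components, so short vectors wrap around.
        return np.array([v[i % len(v)] for i in range(4)])

    print(replicate_to_vec4(np.array([7])))         # [7 7 7 7]  (scalar case, like pan_replicate_4)
    print(replicate_to_vec4(np.array([1, 2])))      # [1 2 1 2]
    print(replicate_to_vec4(np.array([1, 2, 3])))   # [1 2 3 1]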
|
/external/tensorflow/tensorflow/compiler/xla/service/ |
D | hlo_sharding.h |
    292  explicit HloSharding(bool manual, bool replicated, in HloSharding() argument
    294  : replicated_(replicated), in HloSharding()
    295  maximal_(replicated), in HloSharding()
|
D | hlo_sharding_test.cc |
    104  auto* replicated = proto.add_tuple_shardings(); in TEST_F() local
    105  replicated->set_type(OpSharding::REPLICATED); in TEST_F()
    106  *replicated->add_metadata() = GetMetadata("c"); in TEST_F()
|
/external/tensorflow/tensorflow/compiler/mlir/tensorflow/ir/ |
D | tf_device_ops.td |
    180  let summary = "Wraps an N-way replicated computation.";
    183  The region held by this operation represents a computation that is replicated
    187  from the outer scope. The device name map specifies devices on which replicated
    197  the associated replicated device from `devices` if the tf_device.launch refers
    201  Operands are replicated inputs and packed inputs.
    205  the operands are matching in order the `devices` attribute. Each replicated
    210  Operands not replicated can be implicitly captured by ops in the region. Results
    211  are replicated each from the regions terminator.
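
The operand bookkeeping described above can be restated outside MLIR: with `n` replicas, replicated operands arrive in groups of `n` (one value per replica), packed operands contribute a single value shared by all replicas, and the region therefore has `len(replicated) / n + len(packed)` block arguments, which is also the divisibility check exercised by tf_device_ops_invalid.mlir earlier in this list. A small Python sketch of that mapping (names are illustrative, not the MLIR API):

    def replica_block_args(n, replicated_operands, packed_operands):
        # Returns, for each replica, the values bound to the region's block arguments.
        if len(replicated_operands) % n != 0:
            raise ValueError("number of replicated inputs must be evenly divisible by n")
        groups = [replicated_operands[i:i + n]
                  for i in range(0, len(replicated_operands), n)]
        # Replica r gets the r-th value of every replicated group, plus every packed value.
        return [[g[r] for g in groups] + list(packed_operands) for r in range(n)]

    # Two replicas, one replicated input ("a0"/"a1") and one packed input ("w").
    print(replica_block_args(2, ["a0", "a1"], ["w"]))   # [['a0', 'w'], ['a1', 'w']]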
|
/external/llvm-project/llvm/test/CodeGen/SystemZ/ |
D | vec-move-19.ll | 3 ; Test that a loaded value which is replicated is not inserted also in any
|
D | vec-const-13.ll |
    139  ; ...and again with the lower bits of the replicated constant.
    161  ; ...and again with the lower bits of the replicated constant.
|
/external/llvm-project/mlir/test/Conversion/StandardToLLVM/ |
D | convert-argattrs.mlir | 4 // When expanding the memref to multiple arguments, argument attributes are replicated.
|
/external/tensorflow/tensorflow/core/protobuf/tpu/ |
D | compile_metadata.proto |
    78   // replica here. Reconsider when replicated model-parallelism is implemented
    119  // Enables use of XLA collectives for broadcast of replicated parameters to
|
/external/libavc/common/arm/ |
D | ih264_padding_neon.s |
    51   @* The top row of a 2d array is replicated for pad_size times at the top
    120  @* The left column of a 2d array is replicated for pad_size times at the left
    256  @* The left column of a 2d array is replicated for pad_size times at the left
    384  @* The right column of a 2d array is replicated for pad_size times at the right
    530  @* The right column of a 2d array is replicated for pad_size times at the right
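
The padding these comments describe is plain edge replication; NumPy's `np.pad` with `mode='edge'` reproduces the behavior (as a reference only, not the NEON implementation):

    import numpy as np

    img = np.arange(1, 7).reshape(2, 3)   # a tiny 2x3 "frame"
    pad_size = 2
    # Top/bottom rows and left/right columns are each replicated pad_size times.
    print(np.pad(img, pad_width=pad_size, mode='edge'))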
|
/external/tensorflow/tensorflow/compiler/mlir/tensorflow/transforms/ |
D | tpu_rewrite_pass.cc |
    506  const bool replicated = tpu_devices.size() != 1; in BuildParallelExecuteOp() local
    529  std::string device = replicated in BuildParallelExecuteOp()
    547  const bool replicated = tpu_devices.size() != 1; in AssignDevicesToReplicatedExecute() local
    550  std::string device = replicated ? tensorflow::GetDeviceAliasForLogicalCore(0) in AssignDevicesToReplicatedExecute()
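
The device-selection pattern in this snippet: with more than one TPU device the execute op gets a logical-core device alias that the enclosing `tf_device.replicate` later resolves per replica, while a single-device cluster gets the concrete device string. A rough Python paraphrase (the helper and the exact alias format are assumptions for illustration):

    def pick_execute_device(tpu_devices, logical_core=0):
        # tpu_devices is assumed to be a replica-major list of per-core device names.
        replicated = len(tpu_devices) != 1
        if replicated:
            # An alias such as "TPU_REPLICATED_CORE_0", resolved per replica later.
            return "TPU_REPLICATED_CORE_{}".format(logical_core)
        return tpu_devices[0][logical_core]

    print(pick_execute_device([["/job:w/task:0/device:TPU:0"]]))
    print(pick_execute_device([["/job:w/task:0/device:TPU:0"],
                               ["/job:w/task:1/device:TPU:0"]]))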
|
D | tf_passes.td |
    139  inputs and outputs to and from a replicated TPU computation. The number of times
    140  a TPU computation is replicated is defined in the `tf.TPUReplicateMetadata` op
    150  computation is replicated (`num_replicas` > 1), the `num_replicas` attribute is
    158  For example, the following non replicated computation:
    184  The following replicated computation:
    506  let summary = "Reorder replicated and partitioned input ops.";
    609  For example, a non replicated `tf_device.cluster_func`:
    638  A replicated `tf_device.cluster_func`:
    672  A non replicated `tf_device.cluster_func` with the model parallelism:
|
/external/llvm/test/CodeGen/SystemZ/ |
D | vec-const-13.ll |
    139  ; ...and again with the lower bits of the replicated constant.
    161  ; ...and again with the lower bits of the replicated constant.
|
/external/javaparser/javaparser-core-testing-bdd/src/test/resources/com/github/javaparser/ |
D | visitor_scenarios.story | 1 Scenario: A class that is replicated using a CloneVisitor should be equal to the source
|