
Searched refs:embedding (Results 1 – 25 of 191) sorted by relevance

/external/tensorflow/tensorflow/core/protobuf/tpu/
tpu_embedding_configuration.proto
9 // Description of the various embedding tables.
15 // The embedding dimension (i.e., the width of the embedding table).
19 // Details of the learning algorithm used to update the embedding
25 // Mode. Should the embedding layer program be run for inference (just forward
35 // Number of samples in each batch of embedding layer activations sent to
45 // Sharding strategy of the embedding tables among the hosts.
71 // backward pass on the sparse core is executed only after the embedding
80 // core allowing it to process step N+1 while the embedding gradients for step
83 // is complete. The drawback is that embedding activations for step N+1 do not
84 // observe the embedding gradient updates from step N. This could affect model
[all …]
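The tpu_embedding_configuration.proto comments above describe a pipelined mode in which the TensorCore can start step N+1 while the embedding gradient update from step N is still being applied, so step N+1's activations do not observe step N's update. A purely toy Python sketch of that one-step staleness (not TPU runtime code; the table, gradient rule, and step count are made up for illustration):

```python
# Toy illustration (not the TPU runtime) of the pipelining described above:
# the lookup for step N+1 runs before the update produced by step N lands,
# so each step reads a table that is one update behind.
table = {0: 1.0}          # a single embedding "row"
pending_update = 0.0      # update from the previous step, not yet applied

for step in range(3):
    activation = table[0]                 # read happens before the last update is applied
    table[0] -= pending_update            # previous step's update lands while this step runs
    pending_update = 0.1 * activation     # hypothetical gradient for this step
    print(f"step {step}: activation={activation:.2f}")
```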
optimization_parameters.proto
18 // to the TPU embedding program, a tag must be specified for the learning
22 // must be less than or equal to the number of tables in the TPU embedding
28 // embedding configuration, i.e. a tag cannot be skipped if a different tag
34 // embedding layer would be more optimal if the number_of_unique_tags is as
38 // communicate dynamic learning rates to the TPU embedding program.
89 // the normal version of Adam that updates all parameters in the embedding
202 // Frequency above which an embedding ID is classified as hot. The valid
203 // range for the frequency is [0.0, 1.0]. The frequency of an embedding ID is
205 // number of lookups for the embedding table.
208 // The maximum number of hot IDs for the embedding table. If greater than
[all …]
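The hot-ID fields above define an ID's frequency as its lookup count divided by the total lookups for the table, with valid values in [0.0, 1.0]. A hedged sketch of that bookkeeping, with made-up counts and an assumed threshold (this is not the TPU embedding runtime logic):

```python
import numpy as np

# Illustrative only: classify "hot" embedding ids by lookup frequency,
# i.e. lookups of an id divided by total lookups for the table.
lookups = np.array([1000, 40, 3, 900, 12])      # hypothetical per-id lookup counts
frequency = lookups / lookups.sum()             # each value falls in [0.0, 1.0]
hot_threshold = 0.2                             # assumed threshold for this sketch
hot_ids = np.nonzero(frequency > hot_threshold)[0]
print(hot_ids)                                  # ids 0 and 3 in this made-up example
```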
/external/tensorflow/tensorflow/core/api_def/base_api/
api_def_RecvTPUEmbeddingActivations.pbtxt
7 A TensorList of embedding activations containing one Tensor per
8 embedding table in the model.
15 embedding tables in the model.
24 summary: "An op that receives embedding activations on the TPU."
26 The TPU system performs the embedding lookups and aggregations specified by
api_def_SendTPUEmbeddingGradients.pbtxt
7 A TensorList of gradients with which to update embedding tables.
10 with respect to the embedding activations. The embedding tables are updated
11 from these gradients via the optimizer specified in the TPU embedding
32 summary: "Performs gradient updates of embedding tables."
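The SendTPUEmbeddingGradients entry above says the embedding tables are updated from the gradients with respect to the embedding activations, via the configured optimizer. The sketch below shows only the plain-SGD case in NumPy, with made-up shapes, as a hedged illustration of what such an update does; it is not the TPU op itself.

```python
import numpy as np

# Hedged sketch of a plain-SGD embedding-table update (illustrative shapes only):
# looked-up rows move against the gradient of the loss w.r.t. their activations;
# duplicate ids have their gradients summed.
table = np.random.rand(10, 4).astype(np.float32)   # [vocab_size, embedding_dim]
ids = np.array([2, 5, 2])                          # ids looked up in this batch
activation_grads = np.random.rand(3, 4).astype(np.float32)
learning_rate = 0.01

np.add.at(table, ids, -learning_rate * activation_grads)  # accumulates over duplicate ids
```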
api_def_RetrieveTPUEmbeddingStochasticGradientDescentParameters.pbtxt
10 summary: "Retrieve SGD embedding parameters."
12 An op that retrieves optimization parameters from embedding to host
14 the correct embedding table configuration. For example, this op is
api_def_LoadTPUEmbeddingStochasticGradientDescentParameters.pbtxt
10 summary: "Load SGD embedding parameters."
12 An op that loads optimization parameters into HBM for embedding. Must be
14 embedding table configuration. For example, this op is used to install
api_def_RetrieveTPUEmbeddingMomentumParameters.pbtxt
16 summary: "Retrieve Momentum embedding parameters."
18 An op that retrieves optimization parameters from embedding to host
20 the correct embedding table configuration. For example, this op is
api_def_RetrieveTPUEmbeddingAdagradParameters.pbtxt
16 summary: "Retrieve Adagrad embedding parameters."
18 An op that retrieves optimization parameters from embedding to host
20 the correct embedding table configuration. For example, this op is
api_def_RetrieveTPUEmbeddingProximalAdagradParameters.pbtxt
16 summary: "Retrieve proximal Adagrad embedding parameters."
18 An op that retrieves optimization parameters from embedding to host
20 the correct embedding table configuration. For example, this op is
api_def_LoadTPUEmbeddingProximalAdagradParameters.pbtxt
16 summary: "Load proximal Adagrad embedding parameters."
18 An op that loads optimization parameters into HBM for embedding. Must be
20 embedding table configuration. For example, this op is used to install
api_def_LoadTPUEmbeddingAdagradParameters.pbtxt
16 summary: "Load Adagrad embedding parameters."
18 An op that loads optimization parameters into HBM for embedding. Must be
20 embedding table configuration. For example, this op is used to install
api_def_RetrieveTPUEmbeddingADAMParameters.pbtxt
22 summary: "Retrieve ADAM embedding parameters."
24 An op that retrieves optimization parameters from embedding to host
26 the correct embedding table configuration. For example, this op is
api_def_RetrieveTPUEmbeddingFTRLParameters.pbtxt
22 summary: "Retrieve FTRL embedding parameters."
24 An op that retrieves optimization parameters from embedding to host
26 the correct embedding table configuration. For example, this op is
api_def_RetrieveTPUEmbeddingAdadeltaParameters.pbtxt
22 summary: "Retrieve Adadelta embedding parameters."
24 An op that retrieves optimization parameters from embedding to host
26 the correct embedding table configuration. For example, this op is
api_def_RetrieveTPUEmbeddingRMSPropParameters.pbtxt
22 summary: "Retrieve RMSProp embedding parameters."
24 An op that retrieves optimization parameters from embedding to host
26 the correct embedding table configuration. For example, this op is
api_def_LoadTPUEmbeddingMomentumParameters.pbtxt
16 summary: "Load Momentum embedding parameters."
18 An op that loads optimization parameters into HBM for embedding. Must be
20 embedding table configuration. For example, this op is used to install
api_def_EnqueueTPUEmbeddingSparseTensorBatch.pbtxt
15 A list of rank 1 Tensors, indices into the embedding tables.
46 A list of string scalars, one for each embedding table that specify
47 how to normalize the embedding activations after weighted summation.
57 A list of integers specifying the identifier of the embedding table
67 to the ith feature. table_ids[i] indicates which embedding table to look up ith
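The EnqueueTPUEmbeddingSparseTensorBatch entry above mentions per-table combiner strings that control how embedding activations are normalized after weighted summation. The hedged NumPy sketch below shows, for one sample, the conventional "sum", "mean" and "sqrtn" normalizations; the table values, ids, and weights are made up, and the op's exact behavior is defined by its own documentation rather than this sketch.

```python
import numpy as np

# Hedged sketch of combiner normalization for a single sparse feature / sample.
table = np.random.rand(10, 4).astype(np.float32)
ids = np.array([1, 4, 4])                              # ids for one sample
weights = np.array([0.5, 1.0, 2.0], dtype=np.float32)  # per-id weights

weighted_sum = (table[ids] * weights[:, None]).sum(axis=0)     # combiner="sum"
mean_combined = weighted_sum / weights.sum()                   # combiner="mean"
sqrtn_combined = weighted_sum / np.sqrt((weights ** 2).sum())  # combiner="sqrtn"
```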
api_def_RetrieveTPUEmbeddingProximalAdagradParametersGradAccumDebug.pbtxt
22 summary: "Retrieve proximal Adagrad embedding parameters with debug support."
24 An op that retrieves optimization parameters from embedding to host
26 the correct embedding table configuration. For example, this op is
api_def_RetrieveTPUEmbeddingMomentumParametersGradAccumDebug.pbtxt
22 summary: "Retrieve Momentum embedding parameters with debug support."
24 An op that retrieves optimization parameters from embedding to host
26 the correct embedding table configuration. For example, this op is
api_def_RetrieveTPUEmbeddingAdagradParametersGradAccumDebug.pbtxt
22 summary: "Retrieve Adagrad embedding parameters with debug support."
24 An op that retrieves optimization parameters from embedding to host
26 the correct embedding table configuration. For example, this op is
api_def_LoadTPUEmbeddingADAMParameters.pbtxt
22 summary: "Load ADAM embedding parameters."
24 An op that loads optimization parameters into HBM for embedding. Must be
26 embedding table configuration. For example, this op is used to install
api_def_LoadTPUEmbeddingAdadeltaParameters.pbtxt
22 summary: "Load Adadelta embedding parameters."
24 An op that loads optimization parameters into HBM for embedding. Must be
26 embedding table configuration. For example, this op is used to install
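Several of the Load/Retrieve entries above handle Adagrad parameters, where each embedding table is paired with an accumulator of squared gradients that is loaded into and retrieved from device memory alongside the table. As a hedged, NumPy-only illustration of the Adagrad update itself (not the TPU op; the initial accumulator value is an assumption, duplicate-id accumulation is omitted for brevity, and the real implementation may differ in such details):

```python
import numpy as np

# Hedged Adagrad sketch for embedding rows (illustrative, not the TPU implementation).
table = np.random.rand(10, 4).astype(np.float32)
accumulator = np.full_like(table, 0.1)      # assumed nonzero initial accumulator value
ids = np.array([2, 5])                      # distinct ids; duplicate handling omitted
grads = np.random.rand(2, 4).astype(np.float32)
learning_rate = 0.05

accumulator[ids] += grads ** 2
table[ids] -= learning_rate * grads / np.sqrt(accumulator[ids])
```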
/external/u-boot/drivers/spi/
Kconfig
25 access the SPI NOR flash on platforms embedding this Altera
33 used to access the SPI flash on AE3XX and AE250 platforms embedding
58 access the SPI NOR flash on platforms embedding this Broadcom
66 access the SPI NOR flash on platforms embedding these Broadcom
73 used to access the SPI NOR flash on platforms embedding this
80 access the SPI NOR flash on platforms embedding this Designware
87 access the SPI NOR flash on platforms embedding this Samsung
94 access the SPI NOR flash and SPI Data flash on platforms embedding
102 access the SPI NOR flash on platforms embedding this Intel
110 used to access the SPI NOR flash on platforms embedding this
[all …]
/external/tensorflow/tensorflow/python/kernel_tests/
embedding_ops_test.py
260 embedding = embedding_ops.embedding_lookup(p, ids)
262 tf_result = embedding.eval(feed_dict=feed_dict)
265 self.assertShapeEqual(np_result, embedding)
273 embedding = embedding_ops.embedding_lookup(
276 self.assertAllEqual(embedding.eval(), [[1.0]])
284 embedding = embedding_ops.embedding_lookup(
290 self.assertAllEqual(embedding.eval(), 2 * self.evaluate(normalized))
303 embedding = embedding_ops.embedding_lookup(p_variable, ids)
309 tf_result = embedding.eval(feed_dict=feed_dict)
313 self.assertShapeEqual(np_result, embedding)
[all …]
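The embedding_ops_test.py hits above exercise embedding_lookup in TF1 graph mode, including the max_norm clipping path. A minimal sketch of the same call pattern, assuming a TF 1.x-style session via tf.compat.v1 and made-up table shapes:

```python
import numpy as np
import tensorflow as tf

# Minimal sketch of the lookup pattern exercised in the test above (TF1-style graph mode).
tf.compat.v1.disable_eager_execution()

params = tf.constant(np.random.rand(10, 4).astype(np.float32))   # [vocab, dim] table
ids = tf.constant([0, 3, 7])
embedding = tf.nn.embedding_lookup(params, ids)                  # rows 0, 3, 7 of the table
clipped = tf.nn.embedding_lookup(params, ids, max_norm=1.0)      # rows rescaled to norm <= 1

with tf.compat.v1.Session() as sess:
    print(sess.run(embedding).shape)  # (3, 4)
```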
/external/tensorflow/tensorflow/contrib/losses/python/metric_learning/
metric_loss_ops_test.py
99 embedding = np.random.rand(num_data, feat_dim).astype(np.float32)
108 pdist_matrix = pairwise_distance_np(embedding, squared=True)
140 embeddings=ops.convert_to_tensor(embedding),
155 embedding = np.random.rand(num_data, feat_dim).astype(np.float32)
163 pdist_matrix = pairwise_distance_np(embedding)
193 embeddings=ops.convert_to_tensor(embedding),
522 embedding, labels = blobs
523 embedding = (embedding - embedding.mean(axis=0)) / embedding.std(axis=0)
524 embedding = embedding.astype(np.float32)
525 return embedding, labels
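The metric_loss_ops_test.py hits above build a random embedding matrix, compute a pairwise distance matrix with a NumPy helper, and standardize the embedding per dimension before clustering. A hedged reimplementation of those two ingredients (the test's actual pairwise_distance_np helper may differ in details such as the squared/unsquared option):

```python
import numpy as np

def pairwise_squared_distance(embedding):
    # ||a - b||^2 = ||a||^2 - 2 a.b + ||b||^2, computed for every pair of rows at once.
    sq_norms = np.sum(np.square(embedding), axis=1)
    d = sq_norms[:, None] - 2.0 * embedding @ embedding.T + sq_norms[None, :]
    return np.maximum(d, 0.0)  # guard against tiny negative values from rounding

num_data, feat_dim = 8, 5
embedding = np.random.rand(num_data, feat_dim).astype(np.float32)
pdist_matrix = pairwise_squared_distance(embedding)

# Per-dimension standardization, as in the blob-clustering test above.
embedding = (embedding - embedding.mean(axis=0)) / embedding.std(axis=0)
```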
