
Searched refs:Adagrad (Results 1 – 25 of 55) sorted by relevance


/external/tensorflow/tensorflow/core/api_def/base_api/
api_def_RetrieveTPUEmbeddingMDLAdagradLightParameters.pbtxt
    7   Parameter parameters updated by the MDL Adagrad Light optimization algorithm.
    13  Parameter accumulators updated by the MDL Adagrad Light optimization algorithm.
    19  Parameter weights updated by the MDL Adagrad Light optimization algorithm.
    25  Parameter benefits updated by the MDL Adagrad Light optimization algorithm.
    28  summary: "Retrieve MDL Adagrad Light embedding parameters."
api_def_LoadTPUEmbeddingMDLAdagradLightParameters.pbtxt
    7   Value of parameters used in the MDL Adagrad Light optimization algorithm.
    13  Value of accumulators used in the MDL Adagrad Light optimization algorithm.
    19  Value of weights used in the MDL Adagrad Light optimization algorithm.
    25  Value of benefits used in the MDL Adagrad Light optimization algorithm.
    28  summary: "Load MDL Adagrad Light embedding parameters."
api_def_RetrieveTPUEmbeddingProximalAdagradParametersGradAccumDebug.pbtxt
    7   Parameter parameters updated by the proximal Adagrad optimization algorithm.
    13  Parameter accumulators updated by the proximal Adagrad optimization algorithm.
    19  Parameter gradient_accumulators updated by the proximal Adagrad optimization algorithm.
    22  summary: "Retrieve proximal Adagrad embedding parameters with debug support."
api_def_RetrieveTPUEmbeddingAdagradParametersGradAccumDebug.pbtxt
    7   Parameter parameters updated by the Adagrad optimization algorithm.
    13  Parameter accumulators updated by the Adagrad optimization algorithm.
    19  Parameter gradient_accumulators updated by the Adagrad optimization algorithm.
    22  summary: "Retrieve Adagrad embedding parameters with debug support."
api_def_LoadTPUEmbeddingAdagradParametersGradAccumDebug.pbtxt
    7   Value of parameters used in the Adagrad optimization algorithm.
    13  Value of accumulators used in the Adagrad optimization algorithm.
    19  Value of gradient_accumulators used in the Adagrad optimization algorithm.
    22  summary: "Load Adagrad embedding parameters with debug support."
api_def_LoadTPUEmbeddingProximalAdagradParametersGradAccumDebug.pbtxt
    7   Value of parameters used in the proximal Adagrad optimization algorithm.
    13  Value of accumulators used in the proximal Adagrad optimization algorithm.
    19  Value of gradient_accumulators used in the proximal Adagrad optimization algorithm.
    22  summary: "Load proximal Adagrad embedding parameters with debug support."
api_def_RetrieveTPUEmbeddingAdagradParameters.pbtxt
    7   Parameter parameters updated by the Adagrad optimization algorithm.
    13  Parameter accumulators updated by the Adagrad optimization algorithm.
    16  summary: "Retrieve Adagrad embedding parameters."
api_def_RetrieveTPUEmbeddingProximalAdagradParameters.pbtxt
    7   Parameter parameters updated by the proximal Adagrad optimization algorithm.
    13  Parameter accumulators updated by the proximal Adagrad optimization algorithm.
    16  summary: "Retrieve proximal Adagrad embedding parameters."
api_def_LoadTPUEmbeddingAdagradParameters.pbtxt
    7   Value of parameters used in the Adagrad optimization algorithm.
    13  Value of accumulators used in the Adagrad optimization algorithm.
    16  summary: "Load Adagrad embedding parameters."
api_def_LoadTPUEmbeddingProximalAdagradParameters.pbtxt
    7   Value of parameters used in the proximal Adagrad optimization algorithm.
    13  Value of accumulators used in the proximal Adagrad optimization algorithm.
    16  summary: "Load proximal Adagrad embedding parameters."
api_def_ResourceApplyProximalAdagrad.pbtxt
    46  summary: "Update \'*var\' and \'*accum\' according to FOBOS with Adagrad learning rate."
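The last summary above refers to the FOBOS-style proximal Adagrad update. A minimal pure-Python sketch of that update rule for a single scalar weight — the function name and the `l1`/`l2` parameters are illustrative, and the formula follows the commonly documented proximal-Adagrad description rather than this file's exact kernel:

```python
import math

def proximal_adagrad_step(var, accum, grad, lr=0.01, l1=0.0, l2=0.0):
    """One proximal-Adagrad (FOBOS) step for a scalar weight.

    accum  <- accum + grad^2
    adj_lr <- lr / sqrt(accum)                 (per-coordinate learning rate)
    prox_v <- var - adj_lr * grad              (plain Adagrad step)
    var    <- sign(prox_v) * max(|prox_v| - adj_lr*l1, 0) / (1 + adj_lr*l2)
    """
    accum += grad * grad
    adj_lr = lr / math.sqrt(accum)
    prox_v = var - adj_lr * grad
    # L1 soft-thresholding, then L2 shrinkage; with l1 = l2 = 0 this
    # reduces to the ordinary Adagrad update.
    shrunk = max(abs(prox_v) - adj_lr * l1, 0.0)
    var = math.copysign(shrunk, prox_v) / (1.0 + adj_lr * l2)
    return var, accum
```

With `l1 = l2 = 0` the step is exactly plain Adagrad; nonzero `l1` drives small weights to zero, which is the point of the proximal variant for sparse embeddings.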
/external/tensorflow/tensorflow/python/keras/optimizer_v2/
adagrad_test.py
    89   ada_opt = adagrad.Adagrad(learning_rate)
    139  ada_opt = adagrad.Adagrad(learning_rate, decay=decay)
    180  ada_opt = adagrad.Adagrad(learning_rate, epsilon=1.0)
    223  ada_opt = adagrad.Adagrad(lr_schedule)
    263  sgd_op = adagrad.Adagrad(1.0).minimize(loss, var_list=[var0])
    289  ada_opt = adagrad.Adagrad(learning_rate)
    328  ada_opt = adagrad.Adagrad(learning_rate)
    366  ada_opt = adagrad.Adagrad(learning_rate, epsilon=1.)
    404  repeated_update = adagrad.Adagrad(3.0).apply_gradients([
    407  aggregated_update = adagrad.Adagrad(3.0).apply_gradients([
    [all …]
adagrad.py
    34   class Adagrad(optimizer_v2.OptimizerV2):
    73   super(Adagrad, self).__init__(name, **kwargs)
    87   super(Adagrad, self)._prepare_local(var_device, var_dtype, apply_state)
    102  super(Adagrad, self).set_weights(weights)
    157  config = super(Adagrad, self).get_config()
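The `Adagrad` class in adagrad.py implements the dense Adagrad rule. A minimal pure-Python sketch of one step, using the default hyperparameters visible in the golden API files below (`learning_rate=0.001`, `initial_accumulator_value=0.1`, `epsilon=1e-07`); this mirrors the standard Adagrad update, not the TensorFlow kernel line for line:

```python
import math

def adagrad_step(var, accum, grad, lr=0.001, epsilon=1e-07):
    """One Adagrad step for a scalar weight:

    accum <- accum + grad^2
    var   <- var - lr * grad / (sqrt(accum) + epsilon)
    """
    accum += grad * grad
    var -= lr * grad / (math.sqrt(accum) + epsilon)
    return var, accum

# The accumulator starts at initial_accumulator_value (0.1 by default),
# so early steps are not divided by a near-zero denominator.
var, accum = 1.0, 0.1
for _ in range(3):
    var, accum = adagrad_step(var, accum, grad=0.5)
```

Because `accum` only grows, the effective learning rate `lr / sqrt(accum)` decays monotonically per coordinate, which is Adagrad's defining behavior.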
/external/tensorflow/tensorflow/python/tpu/
tpu_embedding_v2_utils_test.py
    31  @parameterized.parameters(tpu_embedding_v2_utils.Adagrad,
    40  tpu_embedding_v2_utils.Adagrad,
    48  tpu_embedding_v2_utils.Adagrad,
    56  tpu_embedding_v2_utils.Adagrad,
tpu_embedding_v2_utils.py
    270  class Adagrad(_Optimizer):
    353  super(Adagrad, self).__init__(
    370  super(Adagrad, self)._set_optimization_parameters(parameters)
/external/tensorflow/tensorflow/tools/api/golden/v1/
tensorflow.tpu.experimental.embedding.-adagrad.pbtxt
    1   path: "tensorflow.tpu.experimental.embedding.Adagrad"
    3   is_instance: "<class \'tensorflow.python.tpu.tpu_embedding_v2_utils.Adagrad\'>"
tensorflow.keras.optimizers.-adagrad.pbtxt
    1   path: "tensorflow.keras.optimizers.Adagrad"
    3   is_instance: "<class \'tensorflow.python.keras.optimizer_v2.adagrad.Adagrad\'>"
    29  … \'name\'], varargs=None, keywords=kwargs, defaults=[\'0.001\', \'0.1\', \'1e-07\', \'Adagrad\'], "
tensorflow.tpu.experimental.embedding.pbtxt
    4   name: "Adagrad"
tensorflow.keras.optimizers.pbtxt
    8   name: "Adagrad"
/external/tensorflow/tensorflow/tools/api/golden/v2/
tensorflow.tpu.experimental.embedding.-adagrad.pbtxt
    1   path: "tensorflow.tpu.experimental.embedding.Adagrad"
    3   is_instance: "<class \'tensorflow.python.tpu.tpu_embedding_v2_utils.Adagrad\'>"
tensorflow.keras.optimizers.-adagrad.pbtxt
    1   path: "tensorflow.keras.optimizers.Adagrad"
    3   is_instance: "<class \'tensorflow.python.keras.optimizer_v2.adagrad.Adagrad\'>"
    29  … \'name\'], varargs=None, keywords=kwargs, defaults=[\'0.001\', \'0.1\', \'1e-07\', \'Adagrad\'], "
tensorflow.optimizers.-adagrad.pbtxt
    1   path: "tensorflow.optimizers.Adagrad"
    3   is_instance: "<class \'tensorflow.python.keras.optimizer_v2.adagrad.Adagrad\'>"
    29  … \'name\'], varargs=None, keywords=kwargs, defaults=[\'0.001\', \'0.1\', \'1e-07\', \'Adagrad\'], "
tensorflow.tpu.experimental.embedding.pbtxt
    4   name: "Adagrad"
tensorflow.optimizers.pbtxt
    8   name: "Adagrad"
tensorflow.keras.optimizers.pbtxt
    8   name: "Adagrad"
