| /external/tensorflow/tensorflow/python/keras/optimizer_v2/ |
| D | legacy_learning_rate_decay.py | 15 """Various learning rate decay functions.""" 35 """Applies exponential decay to the learning rate. 37 When training a model, it is often recommended to lower the learning rate as 39 to a provided initial learning rate. It requires a `global_step` value to 40 compute the decayed learning rate. You can just pass a TensorFlow variable 43 The function returns the decayed learning rate. It is computed as: 51 integer division and the decayed learning rate follows a staircase function. 71 The initial learning rate. 78 staircase: Boolean. If `True` decay the learning rate at discrete intervals 84 learning rate. [all …]
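The fragments above describe the exponential-decay computation: the initial rate times `decay_rate` raised to `global_step / decay_steps`, where `staircase=True` uses integer division so the decayed rate follows a staircase function. A minimal stand-alone sketch of that formula (function name and argument defaults are illustrative, not the Keras API):

```python
import math

def exponential_decay(initial_lr, global_step, decay_steps, decay_rate,
                      staircase=False):
    """Sketch of the decay formula described in the docstring above."""
    # Exponent = number of (possibly fractional) decay periods elapsed.
    p = global_step / decay_steps
    if staircase:
        # Integer division: the schedule becomes a staircase function.
        p = math.floor(p)
    return initial_lr * (decay_rate ** p)

# After exactly one decay period, the rate has been multiplied by decay_rate.
exponential_decay(0.1, 100000, 100000, 0.96)  # 0.096
```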
|
| D | learning_rate_schedule.py | 15 """Various learning rate decay functions.""" 33 """The learning rate schedule base class. 35 You can use a learning rate schedule to modulate how the learning rate 38 Several built-in learning rate schedules are available, such as 77 raise NotImplementedError("Learning rate schedule must override __call__") 81 raise NotImplementedError("Learning rate schedule must override get_config") 100 When training a model, it is often useful to lower the learning rate as 102 to an optimizer step, given a provided initial learning rate. 104 The schedule is a 1-arg callable that produces a decayed learning 106 the learning rate value across different invocations of optimizer functions. [all …]
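The base-class fragments show the contract the schedule API enforces: a schedule is a 1-arg callable mapping an optimizer step to a learning rate, and subclasses must override both `__call__` and `get_config`. A TensorFlow-free sketch of that contract, using a hypothetical inverse-time schedule as the subclass (the subclass name and formula are illustrative):

```python
class LearningRateSchedule:
    """Minimal stand-in for the Keras base class shown above."""
    def __call__(self, step):
        raise NotImplementedError("Learning rate schedule must override __call__")
    def get_config(self):
        raise NotImplementedError("Learning rate schedule must override get_config")

class InverseTimeDecay(LearningRateSchedule):
    """Illustrative schedule: lr = initial / (1 + decay_rate * step / decay_steps)."""
    def __init__(self, initial_learning_rate, decay_steps, decay_rate):
        self.initial_learning_rate = initial_learning_rate
        self.decay_steps = decay_steps
        self.decay_rate = decay_rate

    def __call__(self, step):
        # 1-arg callable: the optimizer passes its current step in.
        return self.initial_learning_rate / (
            1.0 + self.decay_rate * step / self.decay_steps)

    def get_config(self):
        # Enough to reconstruct the schedule across invocations.
        return {"initial_learning_rate": self.initial_learning_rate,
                "decay_steps": self.decay_steps,
                "decay_rate": self.decay_rate}
```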
|
| D | adadelta.py | 33 adaptive learning rate per dimension to address two drawbacks: 35 - The continual decay of learning rates throughout training. 36 - The need for a manually selected global learning rate. 38 Adadelta is a more robust extension of Adagrad that adapts learning rates 40 past gradients. This way, Adadelta continues learning even when many updates 42 don't have to set an initial learning rate. In this version, the initial 43 learning rate can be set, as in most other Keras optimizers. 46 learning_rate: Initial value for the learning rate: 50 Note that `Adadelta` tends to benefit from higher initial learning rate
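The `adadelta.py` fragments describe the key idea: adapt per-dimension learning rates from moving windows of past squared gradients and past squared updates, so no hand-tuned global rate is strictly required. A scalar sketch of one Adadelta step under those rules (state layout and names are assumptions for illustration, not the Keras implementation):

```python
def adadelta_step(param, grad, state, rho=0.95, eps=1e-6, lr=1.0):
    """One Adadelta update on a scalar parameter (illustrative sketch)."""
    eg2, edx2 = state  # running averages of grad^2 and update^2
    eg2 = rho * eg2 + (1 - rho) * grad * grad
    # The ratio of the two running RMS values sets the step size, which is
    # why the classic algorithm works without a tuned global learning rate
    # (lr defaults to 1; Keras additionally exposes it as a knob).
    dx = -((edx2 + eps) ** 0.5) / ((eg2 + eps) ** 0.5) * grad
    edx2 = rho * edx2 + (1 - rho) * dx * dx
    return param + lr * dx, (eg2, edx2)
```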
|
| /external/google-cloud-java/java-aiplatform/proto-google-cloud-aiplatform-v1beta1/src/main/java/com/google/cloud/aiplatform/v1beta1/ |
| D | ActiveLearningConfigOrBuilder.java | 80 * Active learning data sampling config. For every active learning labeling 93 * Active learning data sampling config. For every active learning labeling 106 * Active learning data sampling config. For every active learning labeling 118 * CMLE training config. For every active learning labeling iteration, system 119 * will train a machine learning model on CMLE. The trained model will be used 132 * CMLE training config. For every active learning labeling iteration, system 133 * will train a machine learning model on CMLE. The trained model will be used 146 * CMLE training config. For every active learning labeling iteration, system 147 * will train a machine learning model on CMLE. The trained model will be used
|
| D | ActiveLearningConfig.java | 25 * Parameters that configure the active learning pipeline. Active learning will 193 * Active learning data sampling config. For every active learning labeling 209 * Active learning data sampling config. For every active learning labeling 227 * Active learning data sampling config. For every active learning labeling 246 * CMLE training config. For every active learning labeling iteration, system 247 * will train a machine learning model on CMLE. The trained model will be used 263 * CMLE training config. For every active learning labeling iteration, system 264 * will train a machine learning model on CMLE. The trained model will be used 282 * CMLE training config. For every active learning labeling iteration, system 283 * will train a machine learning model on CMLE. The trained model will be used [all …]
|
| D | StreamingReadFeatureValuesRequestOrBuilder.java | 34 * for a machine learning model predicting user clicks on a website, an 53 * for a machine learning model predicting user clicks on a website, an 70 * IDs is 100. For example, for a machine learning model predicting user 84 * IDs is 100. For example, for a machine learning model predicting user 98 * IDs is 100. For example, for a machine learning model predicting user 113 * IDs is 100. For example, for a machine learning model predicting user
|
| /external/google-cloud-java/java-aiplatform/proto-google-cloud-aiplatform-v1/src/main/java/com/google/cloud/aiplatform/v1/ |
| D | ActiveLearningConfigOrBuilder.java | 80 * Active learning data sampling config. For every active learning labeling 93 * Active learning data sampling config. For every active learning labeling 106 * Active learning data sampling config. For every active learning labeling 118 * CMLE training config. For every active learning labeling iteration, system 119 * will train a machine learning model on CMLE. The trained model will be used 132 * CMLE training config. For every active learning labeling iteration, system 133 * will train a machine learning model on CMLE. The trained model will be used 146 * CMLE training config. For every active learning labeling iteration, system 147 * will train a machine learning model on CMLE. The trained model will be used
|
| D | ActiveLearningConfig.java | 25 * Parameters that configure the active learning pipeline. Active learning will 193 * Active learning data sampling config. For every active learning labeling 209 * Active learning data sampling config. For every active learning labeling 227 * Active learning data sampling config. For every active learning labeling 246 * CMLE training config. For every active learning labeling iteration, system 247 * will train a machine learning model on CMLE. The trained model will be used 263 * CMLE training config. For every active learning labeling iteration, system 264 * will train a machine learning model on CMLE. The trained model will be used 282 * CMLE training config. For every active learning labeling iteration, system 283 * will train a machine learning model on CMLE. The trained model will be used [all …]
|
| D | StreamingReadFeatureValuesRequestOrBuilder.java | 34 * for a machine learning model predicting user clicks on a website, an 53 * for a machine learning model predicting user clicks on a website, an 70 * IDs is 100. For example, for a machine learning model predicting user 84 * IDs is 100. For example, for a machine learning model predicting user 98 * IDs is 100. For example, for a machine learning model predicting user 113 * IDs is 100. For example, for a machine learning model predicting user
|
| /external/tensorflow/tensorflow/core/protobuf/tpu/ |
| D | optimization_parameters.proto | 37 // Dynamic learning rate specification in the TPUEmbeddingConfiguration. The 38 // actual learning rates are provided as a scalar input list to the 42 // For tables where learning rates are dynamically computed and communicated 43 // to the TPU embedding program, a tag must be specified for the learning 49 // learning rate, and specifies exactly one tag if it uses dynamic learning 57 // the same dynamic learning rate, for example, their dynamic learning rate 63 // communicate dynamic learning rates to the TPU embedding program. 65 // equal to the number of unique tags. The learning rate associated with a 71 // Source of learning rate to use. 121 // computing the effective learning rate. When update_accumulator_first is set [all …]
|
| /external/googleapis/google/cloud/aiplatform/v1beta1/ |
| D | data_labeling_job.proto | 140 // Parameters that configure the active learning pipeline. Active learning 146 // Parameters that configure the active learning pipeline. Active learning will 160 // Active learning data sampling config. For every active learning labeling 164 // CMLE training config. For every active learning labeling iteration, system 165 // will train a machine learning model on CMLE. The trained model will be used 170 // Active learning data sampling config. For every active learning labeling 203 // CMLE training config. For every active learning labeling iteration, system 204 // will train a machine learning model on CMLE. The trained model will be used
|
| /external/googleapis/google/cloud/aiplatform/v1/ |
| D | data_labeling_job.proto | 140 // Parameters that configure the active learning pipeline. Active learning 146 // Parameters that configure the active learning pipeline. Active learning will 160 // Active learning data sampling config. For every active learning labeling 164 // CMLE training config. For every active learning labeling iteration, system 165 // will train a machine learning model on CMLE. The trained model will be used 170 // Active learning data sampling config. For every active learning labeling 203 // CMLE training config. For every active learning labeling iteration, system 204 // will train a machine learning model on CMLE. The trained model will be used
|
| /external/google-cloud-java/java-aiplatform/proto-google-cloud-aiplatform-v1/src/main/proto/google/cloud/aiplatform/v1/ |
| D | data_labeling_job.proto | 140 // Parameters that configure the active learning pipeline. Active learning 146 // Parameters that configure the active learning pipeline. Active learning will 160 // Active learning data sampling config. For every active learning labeling 164 // CMLE training config. For every active learning labeling iteration, system 165 // will train a machine learning model on CMLE. The trained model will be used 170 // Active learning data sampling config. For every active learning labeling 203 // CMLE training config. For every active learning labeling iteration, system 204 // will train a machine learning model on CMLE. The trained model will be used
|
| /external/google-cloud-java/java-aiplatform/proto-google-cloud-aiplatform-v1beta1/src/main/proto/google/cloud/aiplatform/v1beta1/ |
| D | data_labeling_job.proto | 140 // Parameters that configure the active learning pipeline. Active learning 146 // Parameters that configure the active learning pipeline. Active learning will 160 // Active learning data sampling config. For every active learning labeling 164 // CMLE training config. For every active learning labeling iteration, system 165 // will train a machine learning model on CMLE. The trained model will be used 170 // Active learning data sampling config. For every active learning labeling 203 // CMLE training config. For every active learning labeling iteration, system 204 // will train a machine learning model on CMLE. The trained model will be used
|
| /external/tensorflow/tensorflow/python/tpu/ |
| D | tpu_embedding_v2_utils.py | 215 a learning rate of 0.2 while the second feature will be looked up in a table 216 that has a learning rate of 0.1. 234 learning_rate: The learning rate. It should be a floating point value or a 235 callable taking no arguments for a dynamic learning rate. 244 `weight_decay_factor` is multiplied by the current learning rate. 316 a learning rate of 0.2 while the second feature will be looked up in a table 317 that has a learning rate of 0.1. 338 learning_rate: The learning rate. It should be a floating point value or a 339 callable taking no arguments for a dynamic learning rate. 348 `weight_decay_factor` is multiplied by the current learning rate. [all …]
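The `tpu_embedding_v2_utils.py` fragments note that `learning_rate` may be either a floating point value or a zero-argument callable (for dynamic learning rates). A hypothetical helper showing how such a float-or-callable value could be resolved at use time (the helper name is an assumption, not this module's API):

```python
def resolve_learning_rate(learning_rate):
    """Resolve a float-or-callable learning rate to a concrete float."""
    if callable(learning_rate):
        # A zero-argument callable lets the rate change between calls.
        return float(learning_rate())
    return float(learning_rate)
```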
|
| /external/google-cloud-java/java-automl/ |
| D | .readme-partials.yaml | 6 …the latest machine learning features, simplify end-to-end journeys, and productionize models with … 9 of machine learning available to you even if you have limited knowledge 10 of machine learning. You can use AutoML to build on Google's machine 11 learning capabilities to create your own custom machine learning models
|
| D | .repo-metadata.json | 13 …learning available to you even if you have limited knowledge of machine learning. You can use Auto…
|
| /external/tensorflow/tensorflow/lite/g3doc/android/ |
| D | index.md | 3 TensorFlow Lite lets you run TensorFlow machine learning (ML) models in your 8 ## Learning roadmap {:.hide-from-toc} 53 ## Machine learning models 56 portable, more efficient machine learning model format. You can use pre-built 64 This page discusses using already-built machine learning models and does not 66 picking, modifying, building, and converting machine learning models for 132 learning models into your Android app: 143 for performing common machine learning tasks on handling visual, audio, and 201 called *accelerators*. Machine learning models can run faster on these 233 you have a machine learning model that uses ML operations that are not supported [all …]
|
| /external/tensorflow/tensorflow/core/tfrt/common/ |
| D | BUILD | 12 # copybara:uncomment "//learning/brain/experimental/dtensor/...", 13 # copybara:uncomment "//learning/brain/experimental/tfrt/...", 14 # copybara:uncomment "//learning/brain/google/xla/...", 15 # copybara:uncomment "//learning/brain/tfrc/...", 16 # copybara:uncomment "//learning/brain/tfrt/...",
|
| /external/federated-compute/ |
| D | README.md | 24 - [Federated learning comic book from Google AI](http://g.co/federated) 25 - [Federated Learning: Collaborative Machine Learning without Centralized 27 Data](https://ai.googleblog.com/2017/04/federated-learning-collaborative.html) 29 - [Towards Federated Learning at Scale: System Design](https://arxiv.org/abs/1902.01046) 50 production in Google's federated learning infrastructure. Other parts - notably,
|
| /external/apache-commons-math/src/main/java/org/apache/commons/math3/ml/neuralnet/sofm/ |
| D | KohonenUpdateAction.java | 44 * <li>α is the current <em>learning rate</em>, </li> 59 * <li>the <em>learning rate</em>, and</li> 72 /** Learning factor update function. */ 81 * @param learningFactor Learning factor update function. 105 // smaller the learning rate will become. in update() 152 * @param learningRate Learning factor. 172 * @param learningRate Learning factor. 190 * @param learningRate Current learning factor. 214 * @param learningRate Learning factor.
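The `KohonenUpdateAction` fragments give the self-organizing-map rule: each neuron moves toward the sample by the current learning rate α scaled by a neighbourhood factor (1 for the best-matching unit, decaying with map distance, so distant neurons learn less). A Python sketch of that update, rather than the Commons Math Java API, with illustrative names:

```python
def kohonen_update(weights, sample, learning_rate, neighbourhood):
    """One SOM step: w' = w + alpha * h * (x - w) for every neuron.

    weights:       list of per-neuron weight vectors
    neighbourhood: per-neuron factor h in [0, 1] (1 = best-matching unit)
    """
    return [
        [w + learning_rate * h * (x - w) for w, x in zip(neuron, sample)]
        for neuron, h in zip(weights, neighbourhood)
    ]
```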
|
| /external/tensorflow/tensorflow/python/keras/ |
| D | optimizer_v1.py | 165 learning rate decay, and Nesterov momentum. 168 lr: float >= 0. Learning rate. 171 decay: float >= 0. Learning rate decay over each update. 236 (except the learning rate, which can be freely tuned). 239 lr: float >= 0. Learning rate. 243 decay: float >= 0. Learning rate decay over each update. 305 Adagrad is an optimizer with parameter-specific learning rates, 314 lr: float >= 0. Initial learning rate. 316 decay: float >= 0. Learning rate decay over each update. 319 - [Adaptive Subgradient Methods for Online Learning and Stochastic [all …]
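The `optimizer_v1.py` fragments list SGD's knobs: learning rate `lr`, `momentum`, per-update learning rate `decay`, and Nesterov momentum. A sketch of one such update step under a common formulation of those knobs (not verified against this exact file):

```python
def sgd_step(param, grad, velocity, lr=0.01, momentum=0.9,
             decay=0.0, iteration=0, nesterov=False):
    """One SGD update with momentum, per-update decay, and Nesterov option."""
    # "Learning rate decay over each update": shrink lr with the iteration count.
    lr_t = lr / (1.0 + decay * iteration)
    velocity = momentum * velocity - lr_t * grad
    if nesterov:
        # Look-ahead: apply momentum again on top of the fresh velocity.
        param = param + momentum * velocity - lr_t * grad
    else:
        param = param + velocity
    return param, velocity
```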
|
| /external/tensorflow/tensorflow/lite/g3doc/examples/build/ |
| D | index.md | 5 Lite model format. The machine learning (ML) models you use with TensorFlow 41 with a machine learning model is limited on a mobile or edge device. Models 46 support a subset of machine learning model operations compared to 71 machine learning (ML) models for vision and natural language processing (NLP). 73 models on standard datasets. The machine learning models in the 100 the machine learning workflow. It enables tracking experiment metrics like 112 model performance well and uses less compute resources. Machine learning model
|
| /external/googleapis/google/cloud/automl/ |
| D | BUILD.bazel | 34 …learning available to you even if you have limited knowledge of machine learning. You can use Auto…
|
| /external/aws-sdk-java-v2/services/lookoutequipment/src/main/resources/codegen-resources/ |
| D | service-2.json | 107 …"documentation":"<p>Creates a machine learning model for data inference. </p> <p>A machine-learnin… 210 …"documentation":"<p>Deletes a machine learning model currently available for Amazon Lookout for Eq… 346 …ides a JSON containing the overall information about a specific machine learning model, including … 363 … "documentation":"<p>Retrieves information about a specific machine learning model version.</p>" 795 "documentation":"<p>Sets the active model version for a given machine learning model.</p>" 1024 …"documentation":"<p>The name of the previously trained machine learning model being used to create… 1185 "documentation":"<p>The name for the machine learning model to be created.</p>" 1189 … "documentation":"<p>The name of the dataset for the machine learning model being created. </p>" 1193 "documentation":"<p>The data schema for the machine learning model being created. </p>" 1197 …":"<p>The input configuration for the labels being used for the machine learning model that's bein… [all …]
|