<html><body>
<style>

body, h1, h2, h3, div, span, p, pre, a {
  margin: 0;
  padding: 0;
  border: 0;
  font-weight: inherit;
  font-style: inherit;
  font-size: 100%;
  font-family: inherit;
  vertical-align: baseline;
}

body {
  font-size: 13px;
  padding: 1em;
}

h1 {
  font-size: 26px;
  margin-bottom: 1em;
}

h2 {
  font-size: 24px;
  margin-bottom: 1em;
}

h3 {
  font-size: 20px;
  margin-bottom: 1em;
  margin-top: 1em;
}

pre, code {
  line-height: 1.5;
  font-family: Monaco, 'DejaVu Sans Mono', 'Bitstream Vera Sans Mono', 'Lucida Console', monospace;
}

pre {
  margin-top: 0.5em;
}

h1, h2, h3, p {
  font-family: Arial, sans-serif;
}

h1, h2, h3 {
  border-bottom: solid #CCC 1px;
}

.toc_element {
  margin-top: 0.5em;
}

.firstline {
  margin-left: 2em;
}

.method {
  margin-top: 1em;
  border: solid 1px #CCC;
  padding: 1em;
  background: #EEE;
}

.details {
  font-weight: bold;
  font-size: 14px;
}

</style>

<h1><a href="ml_v1.html">Cloud Machine Learning Engine</a> . <a href="ml_v1.projects.html">projects</a> . <a href="ml_v1.projects.models.html">models</a></h1>
<h2>Instance Methods</h2>
<p class="toc_element">
  <code><a href="ml_v1.projects.models.versions.html">versions()</a></code>
</p>
<p class="firstline">Returns the versions Resource.</p>

<p class="toc_element">
  <code><a href="#create">create(parent, body, x__xgafv=None)</a></code></p>
<p class="firstline">Creates a model which will later contain one or more versions.</p>
<p class="toc_element">
  <code><a href="#delete">delete(name, x__xgafv=None)</a></code></p>
<p class="firstline">Deletes a model.</p>
<p class="toc_element">
  <code><a href="#get">get(name, x__xgafv=None)</a></code></p>
<p class="firstline">Gets information about a model, including its name, the description (if set), and the default version.</p>
<p class="toc_element">
  <code><a href="#getIamPolicy">getIamPolicy(resource, x__xgafv=None)</a></code></p>
<p class="firstline">Gets the access control policy for a resource.</p>
<p class="toc_element">
  <code><a href="#list">list(parent, pageToken=None, x__xgafv=None, pageSize=None, filter=None)</a></code></p>
<p class="firstline">Lists the models in a project.</p>
<p class="toc_element">
  <code><a href="#list_next">list_next(previous_request, previous_response)</a></code></p>
<p class="firstline">Retrieves the next page of results.</p>
<p class="toc_element">
  <code><a href="#patch">patch(name, body, updateMask=None, x__xgafv=None)</a></code></p>
<p class="firstline">Updates a specific model resource.</p>
<p class="toc_element">
  <code><a href="#setIamPolicy">setIamPolicy(resource, body, x__xgafv=None)</a></code></p>
<p class="firstline">Sets the access control policy on the specified resource. Replaces any existing policy.</p>
<p class="toc_element">
  <code><a href="#testIamPermissions">testIamPermissions(resource, body, x__xgafv=None)</a></code></p>
<p class="firstline">Returns permissions that a caller has on the specified resource.</p>
<h3>Method Details</h3>
<div class="method">
    <code class="details" id="create">create(parent, body, x__xgafv=None)</code>
  <pre>Creates a model which will later contain one or more versions.

You must add at least one version before you can request predictions from
the model. Add versions by calling
[projects.models.versions.create](/ml-engine/reference/rest/v1/projects.models.versions/create).
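
For example, a minimal call with the google-api-python-client might look
like this (a sketch only; it assumes Application Default Credentials and a
project named `my-project`):

    from googleapiclient import discovery

    ml = discovery.build('ml', 'v1')
    request = ml.projects().models().create(
        parent='projects/my-project',
        body={'name': 'my_model', 'regions': ['us-central1']})
    response = request.execute()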
117 118Args: 119 parent: string, Required. The project name. (required) 120 body: object, The request body. (required) 121 The object takes the form of: 122 123{ # Represents a machine learning solution. 124 # 125 # A model can have multiple versions, each of which is a deployed, trained 126 # model ready to receive prediction requests. The model itself is just a 127 # container. 128 "description": "A String", # Optional. The description specified for the model when it was created. 129 "onlinePredictionConsoleLogging": True or False, # Optional. If true, online prediction nodes send `stderr` and `stdout` 130 # streams to Stackdriver Logging. These can be more verbose than the standard 131 # access logs (see `onlinePredictionLogging`) and can incur higher cost. 132 # However, they are helpful for debugging. Note that 133 # [Stackdriver logs may incur a cost](/stackdriver/pricing), especially if 134 # your project receives prediction requests at a high QPS. Estimate your 135 # costs before enabling this option. 136 # 137 # Default is false. 138 "labels": { # Optional. One or more labels that you can add, to organize your models. 139 # Each label is a key-value pair, where both the key and the value are 140 # arbitrary strings that you supply. 141 # For more information, see the documentation on 142 # <a href="/ml-engine/docs/tensorflow/resource-labels">using labels</a>. 143 "a_key": "A String", 144 }, 145 "regions": [ # Optional. The list of regions where the model is going to be deployed. 146 # Currently only one region per model is supported. 147 # Defaults to 'us-central1' if nothing is set. 148 # See the <a href="/ml-engine/docs/tensorflow/regions">available regions</a> 149 # for AI Platform services. 150 # Note: 151 # * No matter where a model is deployed, it can always be accessed by 152 # users from anywhere, both for online and batch prediction. 153 # * The region for a batch prediction job is set by the region field when 154 # submitting the batch prediction job and does not take its value from 155 # this field. 156 "A String", 157 ], 158 "etag": "A String", # `etag` is used for optimistic concurrency control as a way to help 159 # prevent simultaneous updates of a model from overwriting each other. 160 # It is strongly suggested that systems make use of the `etag` in the 161 # read-modify-write cycle to perform model updates in order to avoid race 162 # conditions: An `etag` is returned in the response to `GetModel`, and 163 # systems are expected to put that etag in the request to `UpdateModel` to 164 # ensure that their change will be applied to the model as intended. 165 "defaultVersion": { # Represents a version of the model. # Output only. The default version of the model. This version will be used to 166 # handle prediction requests that do not specify a version. 167 # 168 # You can change the default version by calling 169 # [projects.methods.versions.setDefault](/ml-engine/reference/rest/v1/projects.models.versions/setDefault). 170 # 171 # Each version is a trained model deployed in the cloud, ready to handle 172 # prediction requests. A model can have multiple versions. You can get 173 # information about all of the versions of a given model by calling 174 # [projects.models.versions.list](/ml-engine/reference/rest/v1/projects.models.versions/list). 175 "errorMessage": "A String", # Output only. The details of a failure or a cancellation. 176 "labels": { # Optional. One or more labels that you can add, to organize your model 177 # versions. 
Each label is a key-value pair, where both the key and the value 178 # are arbitrary strings that you supply. 179 # For more information, see the documentation on 180 # <a href="/ml-engine/docs/tensorflow/resource-labels">using labels</a>. 181 "a_key": "A String", 182 }, 183 "machineType": "A String", # Optional. The type of machine on which to serve the model. Currently only 184 # applies to online prediction service. 185 # <dl> 186 # <dt>mls1-c1-m2</dt> 187 # <dd> 188 # The <b>default</b> machine type, with 1 core and 2 GB RAM. The deprecated 189 # name for this machine type is "mls1-highmem-1". 190 # </dd> 191 # <dt>mls1-c4-m2</dt> 192 # <dd> 193 # In <b>Beta</b>. This machine type has 4 cores and 2 GB RAM. The 194 # deprecated name for this machine type is "mls1-highcpu-4". 195 # </dd> 196 # </dl> 197 "description": "A String", # Optional. The description specified for the version when it was created. 198 "runtimeVersion": "A String", # Optional. The AI Platform runtime version to use for this deployment. 199 # If not set, AI Platform uses the default stable version, 1.0. For more 200 # information, see the 201 # [runtime version list](/ml-engine/docs/runtime-version-list) and 202 # [how to manage runtime versions](/ml-engine/docs/versioning). 203 "manualScaling": { # Options for manually scaling a model. # Manually select the number of nodes to use for serving the 204 # model. You should generally use `auto_scaling` with an appropriate 205 # `min_nodes` instead, but this option is available if you want more 206 # predictable billing. Beware that latency and error rates will increase 207 # if the traffic exceeds that capability of the system to serve it based 208 # on the selected number of nodes. 209 "nodes": 42, # The number of nodes to allocate for this model. These nodes are always up, 210 # starting from the time the model is deployed, so the cost of operating 211 # this model will be proportional to `nodes` * number of hours since 212 # last billing cycle plus the cost for each prediction performed. 213 }, 214 "predictionClass": "A String", # Optional. The fully qualified name 215 # (<var>module_name</var>.<var>class_name</var>) of a class that implements 216 # the Predictor interface described in this reference field. The module 217 # containing this class should be included in a package provided to the 218 # [`packageUris` field](#Version.FIELDS.package_uris). 219 # 220 # Specify this field if and only if you are deploying a [custom prediction 221 # routine (beta)](/ml-engine/docs/tensorflow/custom-prediction-routines). 222 # If you specify this field, you must set 223 # [`runtimeVersion`](#Version.FIELDS.runtime_version) to 1.4 or greater. 224 # 225 # The following code sample provides the Predictor interface: 226 # 227 # ```py 228 # class Predictor(object): 229 # """Interface for constructing custom predictors.""" 230 # 231 # def predict(self, instances, **kwargs): 232 # """Performs custom prediction. 233 # 234 # Instances are the decoded values from the request. They have already 235 # been deserialized from JSON. 236 # 237 # Args: 238 # instances: A list of prediction input instances. 239 # **kwargs: A dictionary of keyword args provided as additional 240 # fields on the predict request body. 241 # 242 # Returns: 243 # A list of outputs containing the prediction results. This list must 244 # be JSON serializable. 
245 # """ 246 # raise NotImplementedError() 247 # 248 # @classmethod 249 # def from_path(cls, model_dir): 250 # """Creates an instance of Predictor using the given path. 251 # 252 # Loading of the predictor should be done in this method. 253 # 254 # Args: 255 # model_dir: The local directory that contains the exported model 256 # file along with any additional files uploaded when creating the 257 # version resource. 258 # 259 # Returns: 260 # An instance implementing this Predictor class. 261 # """ 262 # raise NotImplementedError() 263 # ``` 264 # 265 # Learn more about [the Predictor interface and custom prediction 266 # routines](/ml-engine/docs/tensorflow/custom-prediction-routines). 267 "autoScaling": { # Options for automatically scaling a model. # Automatically scale the number of nodes used to serve the model in 268 # response to increases and decreases in traffic. Care should be 269 # taken to ramp up traffic according to the model's ability to scale 270 # or you will start seeing increases in latency and 429 response codes. 271 "minNodes": 42, # Optional. The minimum number of nodes to allocate for this model. These 272 # nodes are always up, starting from the time the model is deployed. 273 # Therefore, the cost of operating this model will be at least 274 # `rate` * `min_nodes` * number of hours since last billing cycle, 275 # where `rate` is the cost per node-hour as documented in the 276 # [pricing guide](/ml-engine/docs/pricing), 277 # even if no predictions are performed. There is additional cost for each 278 # prediction performed. 279 # 280 # Unlike manual scaling, if the load gets too heavy for the nodes 281 # that are up, the service will automatically add nodes to handle the 282 # increased load as well as scale back as traffic drops, always maintaining 283 # at least `min_nodes`. You will be charged for the time in which additional 284 # nodes are used. 285 # 286 # If not specified, `min_nodes` defaults to 0, in which case, when traffic 287 # to a model stops (and after a cool-down period), nodes will be shut down 288 # and no charges will be incurred until traffic to the model resumes. 289 # 290 # You can set `min_nodes` when creating the model version, and you can also 291 # update `min_nodes` for an existing version: 292 # <pre> 293 # update_body.json: 294 # { 295 # 'autoScaling': { 296 # 'minNodes': 5 297 # } 298 # } 299 # </pre> 300 # HTTP request: 301 # <pre> 302 # PATCH 303 # https://ml.googleapis.com/v1/{name=projects/*/models/*/versions/*}?update_mask=autoScaling.minNodes 304 # -d @./update_body.json 305 # </pre> 306 }, 307 "serviceAccount": "A String", # Optional. Specifies the service account for resource access control. 308 "state": "A String", # Output only. The state of a version. 309 "pythonVersion": "A String", # Optional. The version of Python used in prediction. If not set, the default 310 # version is '2.7'. Python '3.5' is available when `runtime_version` is set 311 # to '1.4' and above. Python '2.7' works with all supported runtime versions. 312 "framework": "A String", # Optional. The machine learning framework AI Platform uses to train 313 # this version of the model. Valid values are `TENSORFLOW`, `SCIKIT_LEARN`, 314 # `XGBOOST`. If you do not specify a framework, AI Platform 315 # will analyze files in the deployment_uri to determine a framework. If you 316 # choose `SCIKIT_LEARN` or `XGBOOST`, you must also set the runtime version 317 # of the model to 1.4 or greater. 
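        #
        # (Illustrative aside, not part of the schema: when you later create a
        # version with projects.models.versions.create, a scikit-learn version
        # body might look like
        #
        #     {"name": "v1",
        #      "deploymentUri": "gs://your-bucket/model-dir/",
        #      "framework": "SCIKIT_LEARN",
        #      "runtimeVersion": "1.13",
        #      "pythonVersion": "3.5"}
        #
        # with the bucket path and runtime version being example values only.)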
        #
        # Do **not** specify a framework if you're deploying a [custom
        # prediction routine](/ml-engine/docs/tensorflow/custom-prediction-routines).
    "packageUris": [ # Optional. Cloud Storage paths (`gs://…`) of packages for [custom
        # prediction routines](/ml-engine/docs/tensorflow/custom-prediction-routines)
        # or [scikit-learn pipelines with custom
        # code](/ml-engine/docs/scikit/exporting-for-prediction#custom-pipeline-code).
        #
        # For a custom prediction routine, one of these packages must contain your
        # Predictor class (see
        # [`predictionClass`](#Version.FIELDS.prediction_class)). Additionally,
        # include any dependencies used by your Predictor or scikit-learn pipeline
        # that are not already included in your selected [runtime
        # version](/ml-engine/docs/tensorflow/runtime-version-list).
        #
        # If you specify this field, you must also set
        # [`runtimeVersion`](#Version.FIELDS.runtime_version) to 1.4 or greater.
      "A String",
    ],
    "etag": "A String", # `etag` is used for optimistic concurrency control as a way to help
        # prevent simultaneous updates of a model from overwriting each other.
        # It is strongly suggested that systems make use of the `etag` in the
        # read-modify-write cycle to perform model updates in order to avoid race
        # conditions: An `etag` is returned in the response to `GetVersion`, and
        # systems are expected to put that etag in the request to `UpdateVersion` to
        # ensure that their change will be applied to the model as intended.
    "lastUseTime": "A String", # Output only. The time the version was last used for prediction.
    "deploymentUri": "A String", # Required. The Cloud Storage location of the trained model used to
        # create the version. See the
        # [guide to model
        # deployment](/ml-engine/docs/tensorflow/deploying-models) for more
        # information.
        #
        # When passing Version to
        # [projects.models.versions.create](/ml-engine/reference/rest/v1/projects.models.versions/create)
        # the model service uses the specified location as the source of the model.
        # Once deployed, the model version is hosted by the prediction service, so
        # this location is useful only as a historical record.
        # The total number of model files can't exceed 1000.
    "createTime": "A String", # Output only. The time the version was created.
    "isDefault": True or False, # Output only. If true, this version will be used to handle prediction
        # requests that do not specify a version.
        #
        # You can change the default version by calling
        # [projects.models.versions.setDefault](/ml-engine/reference/rest/v1/projects.models.versions/setDefault).
    "name": "A String", # Required. The name specified for the version when it was created.
        #
        # The version name must be unique within the model it is created in.
  },
  "onlinePredictionLogging": True or False, # Optional. If true, online prediction access logs are sent to Stackdriver
      # Logging. These logs are like standard server access logs, containing
      # information like timestamp and latency for each request. Note that
      # [Stackdriver logs may incur a cost](/stackdriver/pricing), especially if
      # your project receives prediction requests at a high queries per second rate
      # (QPS). Estimate your costs before enabling this option.
      #
      # Default is false.
  "name": "A String", # Required. The name specified for the model when it was created.
376 # 377 # The model name must be unique within the project it is created in. 378} 379 380 x__xgafv: string, V1 error format. 381 Allowed values 382 1 - v1 error format 383 2 - v2 error format 384 385Returns: 386 An object of the form: 387 388 { # Represents a machine learning solution. 389 # 390 # A model can have multiple versions, each of which is a deployed, trained 391 # model ready to receive prediction requests. The model itself is just a 392 # container. 393 "description": "A String", # Optional. The description specified for the model when it was created. 394 "onlinePredictionConsoleLogging": True or False, # Optional. If true, online prediction nodes send `stderr` and `stdout` 395 # streams to Stackdriver Logging. These can be more verbose than the standard 396 # access logs (see `onlinePredictionLogging`) and can incur higher cost. 397 # However, they are helpful for debugging. Note that 398 # [Stackdriver logs may incur a cost](/stackdriver/pricing), especially if 399 # your project receives prediction requests at a high QPS. Estimate your 400 # costs before enabling this option. 401 # 402 # Default is false. 403 "labels": { # Optional. One or more labels that you can add, to organize your models. 404 # Each label is a key-value pair, where both the key and the value are 405 # arbitrary strings that you supply. 406 # For more information, see the documentation on 407 # <a href="/ml-engine/docs/tensorflow/resource-labels">using labels</a>. 408 "a_key": "A String", 409 }, 410 "regions": [ # Optional. The list of regions where the model is going to be deployed. 411 # Currently only one region per model is supported. 412 # Defaults to 'us-central1' if nothing is set. 413 # See the <a href="/ml-engine/docs/tensorflow/regions">available regions</a> 414 # for AI Platform services. 415 # Note: 416 # * No matter where a model is deployed, it can always be accessed by 417 # users from anywhere, both for online and batch prediction. 418 # * The region for a batch prediction job is set by the region field when 419 # submitting the batch prediction job and does not take its value from 420 # this field. 421 "A String", 422 ], 423 "etag": "A String", # `etag` is used for optimistic concurrency control as a way to help 424 # prevent simultaneous updates of a model from overwriting each other. 425 # It is strongly suggested that systems make use of the `etag` in the 426 # read-modify-write cycle to perform model updates in order to avoid race 427 # conditions: An `etag` is returned in the response to `GetModel`, and 428 # systems are expected to put that etag in the request to `UpdateModel` to 429 # ensure that their change will be applied to the model as intended. 430 "defaultVersion": { # Represents a version of the model. # Output only. The default version of the model. This version will be used to 431 # handle prediction requests that do not specify a version. 432 # 433 # You can change the default version by calling 434 # [projects.methods.versions.setDefault](/ml-engine/reference/rest/v1/projects.models.versions/setDefault). 435 # 436 # Each version is a trained model deployed in the cloud, ready to handle 437 # prediction requests. A model can have multiple versions. You can get 438 # information about all of the versions of a given model by calling 439 # [projects.models.versions.list](/ml-engine/reference/rest/v1/projects.models.versions/list). 440 "errorMessage": "A String", # Output only. The details of a failure or a cancellation. 441 "labels": { # Optional. 
One or more labels that you can add, to organize your model 442 # versions. Each label is a key-value pair, where both the key and the value 443 # are arbitrary strings that you supply. 444 # For more information, see the documentation on 445 # <a href="/ml-engine/docs/tensorflow/resource-labels">using labels</a>. 446 "a_key": "A String", 447 }, 448 "machineType": "A String", # Optional. The type of machine on which to serve the model. Currently only 449 # applies to online prediction service. 450 # <dl> 451 # <dt>mls1-c1-m2</dt> 452 # <dd> 453 # The <b>default</b> machine type, with 1 core and 2 GB RAM. The deprecated 454 # name for this machine type is "mls1-highmem-1". 455 # </dd> 456 # <dt>mls1-c4-m2</dt> 457 # <dd> 458 # In <b>Beta</b>. This machine type has 4 cores and 2 GB RAM. The 459 # deprecated name for this machine type is "mls1-highcpu-4". 460 # </dd> 461 # </dl> 462 "description": "A String", # Optional. The description specified for the version when it was created. 463 "runtimeVersion": "A String", # Optional. The AI Platform runtime version to use for this deployment. 464 # If not set, AI Platform uses the default stable version, 1.0. For more 465 # information, see the 466 # [runtime version list](/ml-engine/docs/runtime-version-list) and 467 # [how to manage runtime versions](/ml-engine/docs/versioning). 468 "manualScaling": { # Options for manually scaling a model. # Manually select the number of nodes to use for serving the 469 # model. You should generally use `auto_scaling` with an appropriate 470 # `min_nodes` instead, but this option is available if you want more 471 # predictable billing. Beware that latency and error rates will increase 472 # if the traffic exceeds that capability of the system to serve it based 473 # on the selected number of nodes. 474 "nodes": 42, # The number of nodes to allocate for this model. These nodes are always up, 475 # starting from the time the model is deployed, so the cost of operating 476 # this model will be proportional to `nodes` * number of hours since 477 # last billing cycle plus the cost for each prediction performed. 478 }, 479 "predictionClass": "A String", # Optional. The fully qualified name 480 # (<var>module_name</var>.<var>class_name</var>) of a class that implements 481 # the Predictor interface described in this reference field. The module 482 # containing this class should be included in a package provided to the 483 # [`packageUris` field](#Version.FIELDS.package_uris). 484 # 485 # Specify this field if and only if you are deploying a [custom prediction 486 # routine (beta)](/ml-engine/docs/tensorflow/custom-prediction-routines). 487 # If you specify this field, you must set 488 # [`runtimeVersion`](#Version.FIELDS.runtime_version) to 1.4 or greater. 489 # 490 # The following code sample provides the Predictor interface: 491 # 492 # ```py 493 # class Predictor(object): 494 # """Interface for constructing custom predictors.""" 495 # 496 # def predict(self, instances, **kwargs): 497 # """Performs custom prediction. 498 # 499 # Instances are the decoded values from the request. They have already 500 # been deserialized from JSON. 501 # 502 # Args: 503 # instances: A list of prediction input instances. 504 # **kwargs: A dictionary of keyword args provided as additional 505 # fields on the predict request body. 506 # 507 # Returns: 508 # A list of outputs containing the prediction results. This list must 509 # be JSON serializable. 
510 # """ 511 # raise NotImplementedError() 512 # 513 # @classmethod 514 # def from_path(cls, model_dir): 515 # """Creates an instance of Predictor using the given path. 516 # 517 # Loading of the predictor should be done in this method. 518 # 519 # Args: 520 # model_dir: The local directory that contains the exported model 521 # file along with any additional files uploaded when creating the 522 # version resource. 523 # 524 # Returns: 525 # An instance implementing this Predictor class. 526 # """ 527 # raise NotImplementedError() 528 # ``` 529 # 530 # Learn more about [the Predictor interface and custom prediction 531 # routines](/ml-engine/docs/tensorflow/custom-prediction-routines). 532 "autoScaling": { # Options for automatically scaling a model. # Automatically scale the number of nodes used to serve the model in 533 # response to increases and decreases in traffic. Care should be 534 # taken to ramp up traffic according to the model's ability to scale 535 # or you will start seeing increases in latency and 429 response codes. 536 "minNodes": 42, # Optional. The minimum number of nodes to allocate for this model. These 537 # nodes are always up, starting from the time the model is deployed. 538 # Therefore, the cost of operating this model will be at least 539 # `rate` * `min_nodes` * number of hours since last billing cycle, 540 # where `rate` is the cost per node-hour as documented in the 541 # [pricing guide](/ml-engine/docs/pricing), 542 # even if no predictions are performed. There is additional cost for each 543 # prediction performed. 544 # 545 # Unlike manual scaling, if the load gets too heavy for the nodes 546 # that are up, the service will automatically add nodes to handle the 547 # increased load as well as scale back as traffic drops, always maintaining 548 # at least `min_nodes`. You will be charged for the time in which additional 549 # nodes are used. 550 # 551 # If not specified, `min_nodes` defaults to 0, in which case, when traffic 552 # to a model stops (and after a cool-down period), nodes will be shut down 553 # and no charges will be incurred until traffic to the model resumes. 554 # 555 # You can set `min_nodes` when creating the model version, and you can also 556 # update `min_nodes` for an existing version: 557 # <pre> 558 # update_body.json: 559 # { 560 # 'autoScaling': { 561 # 'minNodes': 5 562 # } 563 # } 564 # </pre> 565 # HTTP request: 566 # <pre> 567 # PATCH 568 # https://ml.googleapis.com/v1/{name=projects/*/models/*/versions/*}?update_mask=autoScaling.minNodes 569 # -d @./update_body.json 570 # </pre> 571 }, 572 "serviceAccount": "A String", # Optional. Specifies the service account for resource access control. 573 "state": "A String", # Output only. The state of a version. 574 "pythonVersion": "A String", # Optional. The version of Python used in prediction. If not set, the default 575 # version is '2.7'. Python '3.5' is available when `runtime_version` is set 576 # to '1.4' and above. Python '2.7' works with all supported runtime versions. 577 "framework": "A String", # Optional. The machine learning framework AI Platform uses to train 578 # this version of the model. Valid values are `TENSORFLOW`, `SCIKIT_LEARN`, 579 # `XGBOOST`. If you do not specify a framework, AI Platform 580 # will analyze files in the deployment_uri to determine a framework. If you 581 # choose `SCIKIT_LEARN` or `XGBOOST`, you must also set the runtime version 582 # of the model to 1.4 or greater. 
        #
        # Do **not** specify a framework if you're deploying a [custom
        # prediction routine](/ml-engine/docs/tensorflow/custom-prediction-routines).
    "packageUris": [ # Optional. Cloud Storage paths (`gs://…`) of packages for [custom
        # prediction routines](/ml-engine/docs/tensorflow/custom-prediction-routines)
        # or [scikit-learn pipelines with custom
        # code](/ml-engine/docs/scikit/exporting-for-prediction#custom-pipeline-code).
        #
        # For a custom prediction routine, one of these packages must contain your
        # Predictor class (see
        # [`predictionClass`](#Version.FIELDS.prediction_class)). Additionally,
        # include any dependencies used by your Predictor or scikit-learn pipeline
        # that are not already included in your selected [runtime
        # version](/ml-engine/docs/tensorflow/runtime-version-list).
        #
        # If you specify this field, you must also set
        # [`runtimeVersion`](#Version.FIELDS.runtime_version) to 1.4 or greater.
      "A String",
    ],
    "etag": "A String", # `etag` is used for optimistic concurrency control as a way to help
        # prevent simultaneous updates of a model from overwriting each other.
        # It is strongly suggested that systems make use of the `etag` in the
        # read-modify-write cycle to perform model updates in order to avoid race
        # conditions: An `etag` is returned in the response to `GetVersion`, and
        # systems are expected to put that etag in the request to `UpdateVersion` to
        # ensure that their change will be applied to the model as intended.
    "lastUseTime": "A String", # Output only. The time the version was last used for prediction.
    "deploymentUri": "A String", # Required. The Cloud Storage location of the trained model used to
        # create the version. See the
        # [guide to model
        # deployment](/ml-engine/docs/tensorflow/deploying-models) for more
        # information.
        #
        # When passing Version to
        # [projects.models.versions.create](/ml-engine/reference/rest/v1/projects.models.versions/create)
        # the model service uses the specified location as the source of the model.
        # Once deployed, the model version is hosted by the prediction service, so
        # this location is useful only as a historical record.
        # The total number of model files can't exceed 1000.
    "createTime": "A String", # Output only. The time the version was created.
    "isDefault": True or False, # Output only. If true, this version will be used to handle prediction
        # requests that do not specify a version.
        #
        # You can change the default version by calling
        # [projects.models.versions.setDefault](/ml-engine/reference/rest/v1/projects.models.versions/setDefault).
    "name": "A String", # Required. The name specified for the version when it was created.
        #
        # The version name must be unique within the model it is created in.
  },
  "onlinePredictionLogging": True or False, # Optional. If true, online prediction access logs are sent to Stackdriver
      # Logging. These logs are like standard server access logs, containing
      # information like timestamp and latency for each request. Note that
      # [Stackdriver logs may incur a cost](/stackdriver/pricing), especially if
      # your project receives prediction requests at a high queries per second rate
      # (QPS). Estimate your costs before enabling this option.
      #
      # Default is false.
  "name": "A String", # Required. The name specified for the model when it was created.
      #
      # The model name must be unique within the project it is created in.
  }</pre>
</div>

<div class="method">
    <code class="details" id="delete">delete(name, x__xgafv=None)</code>
  <pre>Deletes a model.

You can only delete a model if there are no versions in it. You can delete
versions by calling
[projects.models.versions.delete](/ml-engine/reference/rest/v1/projects.models.versions/delete).

Args:
  name: string, Required. The name of the model. (required)
  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # This resource represents a long-running operation that is the result of a
      # network API call.
    "metadata": { # Service-specific metadata associated with the operation. It typically
        # contains progress information and common metadata such as create time.
        # Some services might not provide such metadata. Any method that returns a
        # long-running operation should document the metadata type, if any.
      "a_key": "", # Properties of the object. Contains field @type with type URL.
    },
    "error": { # The `Status` type defines a logical error model that is suitable for # The error result of the operation in case of failure or cancellation.
        # different programming environments, including REST APIs and RPC APIs. It is
        # used by [gRPC](https://github.com/grpc). Each `Status` message contains
        # three pieces of data: error code, error message, and error details.
        #
        # You can find out more about this error model and how to work with it in the
        # [API Design Guide](https://cloud.google.com/apis/design/errors).
      "message": "A String", # A developer-facing error message, which should be in English. Any
          # user-facing error message should be localized and sent in the
          # google.rpc.Status.details field, or localized by the client.
      "code": 42, # The status code, which should be an enum value of google.rpc.Code.
      "details": [ # A list of messages that carry the error details. There is a common set of
          # message types for APIs to use.
        {
          "a_key": "", # Properties of the object. Contains field @type with type URL.
        },
      ],
    },
    "done": True or False, # If the value is `false`, it means the operation is still in progress.
        # If `true`, the operation is completed, and either `error` or `response` is
        # available.
    "response": { # The normal response of the operation in case of success. If the original
        # method returns no data on success, such as `Delete`, the response is
        # `google.protobuf.Empty`. If the original method is standard
        # `Get`/`Create`/`Update`, the response should be the resource. For other
        # methods, the response should have the type `XxxResponse`, where `Xxx`
        # is the original method name. For example, if the original method name
        # is `TakeSnapshot()`, the inferred response type is
        # `TakeSnapshotResponse`.
      "a_key": "", # Properties of the object. Contains field @type with type URL.
    },
    "name": "A String", # The server-assigned name, which is only unique within the same service that
        # originally returns it. If you use the default HTTP mapping, the
        # `name` should be a resource name ending with `operations/{unique_id}`.
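        #
        # (Illustrative note, not part of the schema: a delete call and a poll of
        # the returned long-running operation might look like
        #
        #     op = ml.projects().models().delete(
        #         name='projects/my-project/models/my_model').execute()
        #     op = ml.projects().operations().get(name=op['name']).execute()
        #
        # where `ml` is a client built as in the create() example above, and the
        # project and model names are placeholders.)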
706 }</pre> 707</div> 708 709<div class="method"> 710 <code class="details" id="get">get(name, x__xgafv=None)</code> 711 <pre>Gets information about a model, including its name, the description (if 712set), and the default version (if at least one version of the model has 713been deployed). 714 715Args: 716 name: string, Required. The name of the model. (required) 717 x__xgafv: string, V1 error format. 718 Allowed values 719 1 - v1 error format 720 2 - v2 error format 721 722Returns: 723 An object of the form: 724 725 { # Represents a machine learning solution. 726 # 727 # A model can have multiple versions, each of which is a deployed, trained 728 # model ready to receive prediction requests. The model itself is just a 729 # container. 730 "description": "A String", # Optional. The description specified for the model when it was created. 731 "onlinePredictionConsoleLogging": True or False, # Optional. If true, online prediction nodes send `stderr` and `stdout` 732 # streams to Stackdriver Logging. These can be more verbose than the standard 733 # access logs (see `onlinePredictionLogging`) and can incur higher cost. 734 # However, they are helpful for debugging. Note that 735 # [Stackdriver logs may incur a cost](/stackdriver/pricing), especially if 736 # your project receives prediction requests at a high QPS. Estimate your 737 # costs before enabling this option. 738 # 739 # Default is false. 740 "labels": { # Optional. One or more labels that you can add, to organize your models. 741 # Each label is a key-value pair, where both the key and the value are 742 # arbitrary strings that you supply. 743 # For more information, see the documentation on 744 # <a href="/ml-engine/docs/tensorflow/resource-labels">using labels</a>. 745 "a_key": "A String", 746 }, 747 "regions": [ # Optional. The list of regions where the model is going to be deployed. 748 # Currently only one region per model is supported. 749 # Defaults to 'us-central1' if nothing is set. 750 # See the <a href="/ml-engine/docs/tensorflow/regions">available regions</a> 751 # for AI Platform services. 752 # Note: 753 # * No matter where a model is deployed, it can always be accessed by 754 # users from anywhere, both for online and batch prediction. 755 # * The region for a batch prediction job is set by the region field when 756 # submitting the batch prediction job and does not take its value from 757 # this field. 758 "A String", 759 ], 760 "etag": "A String", # `etag` is used for optimistic concurrency control as a way to help 761 # prevent simultaneous updates of a model from overwriting each other. 762 # It is strongly suggested that systems make use of the `etag` in the 763 # read-modify-write cycle to perform model updates in order to avoid race 764 # conditions: An `etag` is returned in the response to `GetModel`, and 765 # systems are expected to put that etag in the request to `UpdateModel` to 766 # ensure that their change will be applied to the model as intended. 767 "defaultVersion": { # Represents a version of the model. # Output only. The default version of the model. This version will be used to 768 # handle prediction requests that do not specify a version. 769 # 770 # You can change the default version by calling 771 # [projects.methods.versions.setDefault](/ml-engine/reference/rest/v1/projects.models.versions/setDefault). 772 # 773 # Each version is a trained model deployed in the cloud, ready to handle 774 # prediction requests. A model can have multiple versions. 
You can get 775 # information about all of the versions of a given model by calling 776 # [projects.models.versions.list](/ml-engine/reference/rest/v1/projects.models.versions/list). 777 "errorMessage": "A String", # Output only. The details of a failure or a cancellation. 778 "labels": { # Optional. One or more labels that you can add, to organize your model 779 # versions. Each label is a key-value pair, where both the key and the value 780 # are arbitrary strings that you supply. 781 # For more information, see the documentation on 782 # <a href="/ml-engine/docs/tensorflow/resource-labels">using labels</a>. 783 "a_key": "A String", 784 }, 785 "machineType": "A String", # Optional. The type of machine on which to serve the model. Currently only 786 # applies to online prediction service. 787 # <dl> 788 # <dt>mls1-c1-m2</dt> 789 # <dd> 790 # The <b>default</b> machine type, with 1 core and 2 GB RAM. The deprecated 791 # name for this machine type is "mls1-highmem-1". 792 # </dd> 793 # <dt>mls1-c4-m2</dt> 794 # <dd> 795 # In <b>Beta</b>. This machine type has 4 cores and 2 GB RAM. The 796 # deprecated name for this machine type is "mls1-highcpu-4". 797 # </dd> 798 # </dl> 799 "description": "A String", # Optional. The description specified for the version when it was created. 800 "runtimeVersion": "A String", # Optional. The AI Platform runtime version to use for this deployment. 801 # If not set, AI Platform uses the default stable version, 1.0. For more 802 # information, see the 803 # [runtime version list](/ml-engine/docs/runtime-version-list) and 804 # [how to manage runtime versions](/ml-engine/docs/versioning). 805 "manualScaling": { # Options for manually scaling a model. # Manually select the number of nodes to use for serving the 806 # model. You should generally use `auto_scaling` with an appropriate 807 # `min_nodes` instead, but this option is available if you want more 808 # predictable billing. Beware that latency and error rates will increase 809 # if the traffic exceeds that capability of the system to serve it based 810 # on the selected number of nodes. 811 "nodes": 42, # The number of nodes to allocate for this model. These nodes are always up, 812 # starting from the time the model is deployed, so the cost of operating 813 # this model will be proportional to `nodes` * number of hours since 814 # last billing cycle plus the cost for each prediction performed. 815 }, 816 "predictionClass": "A String", # Optional. The fully qualified name 817 # (<var>module_name</var>.<var>class_name</var>) of a class that implements 818 # the Predictor interface described in this reference field. The module 819 # containing this class should be included in a package provided to the 820 # [`packageUris` field](#Version.FIELDS.package_uris). 821 # 822 # Specify this field if and only if you are deploying a [custom prediction 823 # routine (beta)](/ml-engine/docs/tensorflow/custom-prediction-routines). 824 # If you specify this field, you must set 825 # [`runtimeVersion`](#Version.FIELDS.runtime_version) to 1.4 or greater. 826 # 827 # The following code sample provides the Predictor interface: 828 # 829 # ```py 830 # class Predictor(object): 831 # """Interface for constructing custom predictors.""" 832 # 833 # def predict(self, instances, **kwargs): 834 # """Performs custom prediction. 835 # 836 # Instances are the decoded values from the request. They have already 837 # been deserialized from JSON. 838 # 839 # Args: 840 # instances: A list of prediction input instances. 
841 # **kwargs: A dictionary of keyword args provided as additional 842 # fields on the predict request body. 843 # 844 # Returns: 845 # A list of outputs containing the prediction results. This list must 846 # be JSON serializable. 847 # """ 848 # raise NotImplementedError() 849 # 850 # @classmethod 851 # def from_path(cls, model_dir): 852 # """Creates an instance of Predictor using the given path. 853 # 854 # Loading of the predictor should be done in this method. 855 # 856 # Args: 857 # model_dir: The local directory that contains the exported model 858 # file along with any additional files uploaded when creating the 859 # version resource. 860 # 861 # Returns: 862 # An instance implementing this Predictor class. 863 # """ 864 # raise NotImplementedError() 865 # ``` 866 # 867 # Learn more about [the Predictor interface and custom prediction 868 # routines](/ml-engine/docs/tensorflow/custom-prediction-routines). 869 "autoScaling": { # Options for automatically scaling a model. # Automatically scale the number of nodes used to serve the model in 870 # response to increases and decreases in traffic. Care should be 871 # taken to ramp up traffic according to the model's ability to scale 872 # or you will start seeing increases in latency and 429 response codes. 873 "minNodes": 42, # Optional. The minimum number of nodes to allocate for this model. These 874 # nodes are always up, starting from the time the model is deployed. 875 # Therefore, the cost of operating this model will be at least 876 # `rate` * `min_nodes` * number of hours since last billing cycle, 877 # where `rate` is the cost per node-hour as documented in the 878 # [pricing guide](/ml-engine/docs/pricing), 879 # even if no predictions are performed. There is additional cost for each 880 # prediction performed. 881 # 882 # Unlike manual scaling, if the load gets too heavy for the nodes 883 # that are up, the service will automatically add nodes to handle the 884 # increased load as well as scale back as traffic drops, always maintaining 885 # at least `min_nodes`. You will be charged for the time in which additional 886 # nodes are used. 887 # 888 # If not specified, `min_nodes` defaults to 0, in which case, when traffic 889 # to a model stops (and after a cool-down period), nodes will be shut down 890 # and no charges will be incurred until traffic to the model resumes. 891 # 892 # You can set `min_nodes` when creating the model version, and you can also 893 # update `min_nodes` for an existing version: 894 # <pre> 895 # update_body.json: 896 # { 897 # 'autoScaling': { 898 # 'minNodes': 5 899 # } 900 # } 901 # </pre> 902 # HTTP request: 903 # <pre> 904 # PATCH 905 # https://ml.googleapis.com/v1/{name=projects/*/models/*/versions/*}?update_mask=autoScaling.minNodes 906 # -d @./update_body.json 907 # </pre> 908 }, 909 "serviceAccount": "A String", # Optional. Specifies the service account for resource access control. 910 "state": "A String", # Output only. The state of a version. 911 "pythonVersion": "A String", # Optional. The version of Python used in prediction. If not set, the default 912 # version is '2.7'. Python '3.5' is available when `runtime_version` is set 913 # to '1.4' and above. Python '2.7' works with all supported runtime versions. 914 "framework": "A String", # Optional. The machine learning framework AI Platform uses to train 915 # this version of the model. Valid values are `TENSORFLOW`, `SCIKIT_LEARN`, 916 # `XGBOOST`. 
If you do not specify a framework, AI Platform 917 # will analyze files in the deployment_uri to determine a framework. If you 918 # choose `SCIKIT_LEARN` or `XGBOOST`, you must also set the runtime version 919 # of the model to 1.4 or greater. 920 # 921 # Do **not** specify a framework if you're deploying a [custom 922 # prediction routine](/ml-engine/docs/tensorflow/custom-prediction-routines). 923 "packageUris": [ # Optional. Cloud Storage paths (`gs://…`) of packages for [custom 924 # prediction routines](/ml-engine/docs/tensorflow/custom-prediction-routines) 925 # or [scikit-learn pipelines with custom 926 # code](/ml-engine/docs/scikit/exporting-for-prediction#custom-pipeline-code). 927 # 928 # For a custom prediction routine, one of these packages must contain your 929 # Predictor class (see 930 # [`predictionClass`](#Version.FIELDS.prediction_class)). Additionally, 931 # include any dependencies used by your Predictor or scikit-learn pipeline 932 # uses that are not already included in your selected [runtime 933 # version](/ml-engine/docs/tensorflow/runtime-version-list). 934 # 935 # If you specify this field, you must also set 936 # [`runtimeVersion`](#Version.FIELDS.runtime_version) to 1.4 or greater. 937 "A String", 938 ], 939 "etag": "A String", # `etag` is used for optimistic concurrency control as a way to help 940 # prevent simultaneous updates of a model from overwriting each other. 941 # It is strongly suggested that systems make use of the `etag` in the 942 # read-modify-write cycle to perform model updates in order to avoid race 943 # conditions: An `etag` is returned in the response to `GetVersion`, and 944 # systems are expected to put that etag in the request to `UpdateVersion` to 945 # ensure that their change will be applied to the model as intended. 946 "lastUseTime": "A String", # Output only. The time the version was last used for prediction. 947 "deploymentUri": "A String", # Required. The Cloud Storage location of the trained model used to 948 # create the version. See the 949 # [guide to model 950 # deployment](/ml-engine/docs/tensorflow/deploying-models) for more 951 # information. 952 # 953 # When passing Version to 954 # [projects.models.versions.create](/ml-engine/reference/rest/v1/projects.models.versions/create) 955 # the model service uses the specified location as the source of the model. 956 # Once deployed, the model version is hosted by the prediction service, so 957 # this location is useful only as a historical record. 958 # The total number of model files can't exceed 1000. 959 "createTime": "A String", # Output only. The time the version was created. 960 "isDefault": True or False, # Output only. If true, this version will be used to handle prediction 961 # requests that do not specify a version. 962 # 963 # You can change the default version by calling 964 # [projects.methods.versions.setDefault](/ml-engine/reference/rest/v1/projects.models.versions/setDefault). 965 "name": "A String", # Required.The name specified for the version when it was created. 966 # 967 # The version name must be unique within the model it is created in. 968 }, 969 "onlinePredictionLogging": True or False, # Optional. If true, online prediction access logs are sent to StackDriver 970 # Logging. These logs are like standard server access logs, containing 971 # information like timestamp and latency for each request. 
Note that 972 # [Stackdriver logs may incur a cost](/stackdriver/pricing), especially if 973 # your project receives prediction requests at a high queries per second rate 974 # (QPS). Estimate your costs before enabling this option. 975 # 976 # Default is false. 977 "name": "A String", # Required. The name specified for the model when it was created. 978 # 979 # The model name must be unique within the project it is created in. 980 }</pre> 981</div> 982 983<div class="method"> 984 <code class="details" id="getIamPolicy">getIamPolicy(resource, x__xgafv=None)</code> 985 <pre>Gets the access control policy for a resource. 986Returns an empty policy if the resource exists and does not have a policy 987set. 988 989Args: 990 resource: string, REQUIRED: The resource for which the policy is being requested. 991See the operation documentation for the appropriate value for this field. (required) 992 x__xgafv: string, V1 error format. 993 Allowed values 994 1 - v1 error format 995 2 - v2 error format 996 997Returns: 998 An object of the form: 999 1000 { # Defines an Identity and Access Management (IAM) policy. It is used to 1001 # specify access control policies for Cloud Platform resources. 1002 # 1003 # 1004 # A `Policy` consists of a list of `bindings`. A `binding` binds a list of 1005 # `members` to a `role`, where the members can be user accounts, Google groups, 1006 # Google domains, and service accounts. A `role` is a named list of permissions 1007 # defined by IAM. 1008 # 1009 # **JSON Example** 1010 # 1011 # { 1012 # "bindings": [ 1013 # { 1014 # "role": "roles/owner", 1015 # "members": [ 1016 # "user:mike@example.com", 1017 # "group:admins@example.com", 1018 # "domain:google.com", 1019 # "serviceAccount:my-other-app@appspot.gserviceaccount.com" 1020 # ] 1021 # }, 1022 # { 1023 # "role": "roles/viewer", 1024 # "members": ["user:sean@example.com"] 1025 # } 1026 # ] 1027 # } 1028 # 1029 # **YAML Example** 1030 # 1031 # bindings: 1032 # - members: 1033 # - user:mike@example.com 1034 # - group:admins@example.com 1035 # - domain:google.com 1036 # - serviceAccount:my-other-app@appspot.gserviceaccount.com 1037 # role: roles/owner 1038 # - members: 1039 # - user:sean@example.com 1040 # role: roles/viewer 1041 # 1042 # 1043 # For a description of IAM and its features, see the 1044 # [IAM developer's guide](https://cloud.google.com/iam/docs). 1045 "bindings": [ # Associates a list of `members` to a `role`. 1046 # `bindings` with no members will result in an error. 1047 { # Associates `members` with a `role`. 1048 "role": "A String", # Role that is assigned to `members`. 1049 # For example, `roles/viewer`, `roles/editor`, or `roles/owner`. 1050 "members": [ # Specifies the identities requesting access for a Cloud Platform resource. 1051 # `members` can have the following values: 1052 # 1053 # * `allUsers`: A special identifier that represents anyone who is 1054 # on the internet; with or without a Google account. 1055 # 1056 # * `allAuthenticatedUsers`: A special identifier that represents anyone 1057 # who is authenticated with a Google account or a service account. 1058 # 1059 # * `user:{emailid}`: An email address that represents a specific Google 1060 # account. For example, `alice@gmail.com` . 1061 # 1062 # 1063 # * `serviceAccount:{emailid}`: An email address that represents a service 1064 # account. For example, `my-other-app@appspot.gserviceaccount.com`. 1065 # 1066 # * `group:{emailid}`: An email address that represents a Google group. 1067 # For example, `admins@example.com`. 
1068 # 1069 # 1070 # * `domain:{domain}`: The G Suite domain (primary) that represents all the 1071 # users of that domain. For example, `google.com` or `example.com`. 1072 # 1073 "A String", 1074 ], 1075 "condition": { # Represents an expression text. Example: # The condition that is associated with this binding. 1076 # NOTE: An unsatisfied condition will not allow user access via current 1077 # binding. Different bindings, including their conditions, are examined 1078 # independently. 1079 # 1080 # title: "User account presence" 1081 # description: "Determines whether the request has a user account" 1082 # expression: "size(request.user) > 0" 1083 "description": "A String", # An optional description of the expression. This is a longer text which 1084 # describes the expression, e.g. when hovered over it in a UI. 1085 "expression": "A String", # Textual representation of an expression in 1086 # Common Expression Language syntax. 1087 # 1088 # The application context of the containing message determines which 1089 # well-known feature set of CEL is supported. 1090 "location": "A String", # An optional string indicating the location of the expression for error 1091 # reporting, e.g. a file name and a position in the file. 1092 "title": "A String", # An optional title for the expression, i.e. a short string describing 1093 # its purpose. This can be used e.g. in UIs which allow to enter the 1094 # expression. 1095 }, 1096 }, 1097 ], 1098 "etag": "A String", # `etag` is used for optimistic concurrency control as a way to help 1099 # prevent simultaneous updates of a policy from overwriting each other. 1100 # It is strongly suggested that systems make use of the `etag` in the 1101 # read-modify-write cycle to perform policy updates in order to avoid race 1102 # conditions: An `etag` is returned in the response to `getIamPolicy`, and 1103 # systems are expected to put that etag in the request to `setIamPolicy` to 1104 # ensure that their change will be applied to the same version of the policy. 1105 # 1106 # If no `etag` is provided in the call to `setIamPolicy`, then the existing 1107 # policy is overwritten blindly. 1108 "version": 42, # Deprecated. 1109 "auditConfigs": [ # Specifies cloud audit logging configuration for this policy. 1110 { # Specifies the audit configuration for a service. 1111 # The configuration determines which permission types are logged, and what 1112 # identities, if any, are exempted from logging. 1113 # An AuditConfig must have one or more AuditLogConfigs. 1114 # 1115 # If there are AuditConfigs for both `allServices` and a specific service, 1116 # the union of the two AuditConfigs is used for that service: the log_types 1117 # specified in each AuditConfig are enabled, and the exempted_members in each 1118 # AuditLogConfig are exempted. 
1119 # 1120 # Example Policy with multiple AuditConfigs: 1121 # 1122 # { 1123 # "audit_configs": [ 1124 # { 1125 # "service": "allServices" 1126 # "audit_log_configs": [ 1127 # { 1128 # "log_type": "DATA_READ", 1129 # "exempted_members": [ 1130 # "user:foo@gmail.com" 1131 # ] 1132 # }, 1133 # { 1134 # "log_type": "DATA_WRITE", 1135 # }, 1136 # { 1137 # "log_type": "ADMIN_READ", 1138 # } 1139 # ] 1140 # }, 1141 # { 1142 # "service": "fooservice.googleapis.com" 1143 # "audit_log_configs": [ 1144 # { 1145 # "log_type": "DATA_READ", 1146 # }, 1147 # { 1148 # "log_type": "DATA_WRITE", 1149 # "exempted_members": [ 1150 # "user:bar@gmail.com" 1151 # ] 1152 # } 1153 # ] 1154 # } 1155 # ] 1156 # } 1157 # 1158 # For fooservice, this policy enables DATA_READ, DATA_WRITE and ADMIN_READ 1159 # logging. It also exempts foo@gmail.com from DATA_READ logging, and 1160 # bar@gmail.com from DATA_WRITE logging. 1161 "auditLogConfigs": [ # The configuration for logging of each type of permission. 1162 { # Provides the configuration for logging a type of permissions. 1163 # Example: 1164 # 1165 # { 1166 # "audit_log_configs": [ 1167 # { 1168 # "log_type": "DATA_READ", 1169 # "exempted_members": [ 1170 # "user:foo@gmail.com" 1171 # ] 1172 # }, 1173 # { 1174 # "log_type": "DATA_WRITE", 1175 # } 1176 # ] 1177 # } 1178 # 1179 # This enables 'DATA_READ' and 'DATA_WRITE' logging, while exempting 1180 # foo@gmail.com from DATA_READ logging. 1181 "exemptedMembers": [ # Specifies the identities that do not cause logging for this type of 1182 # permission. 1183 # Follows the same format of Binding.members. 1184 "A String", 1185 ], 1186 "logType": "A String", # The log type that this config enables. 1187 }, 1188 ], 1189 "service": "A String", # Specifies a service that will be enabled for audit logging. 1190 # For example, `storage.googleapis.com`, `cloudsql.googleapis.com`. 1191 # `allServices` is a special value that covers all services. 1192 }, 1193 ], 1194 }</pre> 1195</div> 1196 1197<div class="method"> 1198 <code class="details" id="list">list(parent, pageToken=None, x__xgafv=None, pageSize=None, filter=None)</code> 1199 <pre>Lists the models in a project. 1200 1201Each project can contain multiple models, and each model can have multiple 1202versions. 1203 1204If there are no models that match the request parameters, the list request 1205returns an empty response body: {}. 1206 1207Args: 1208 parent: string, Required. The name of the project whose models are to be listed. (required) 1209 pageToken: string, Optional. A page token to request the next page of results. 1210 1211You get the token from the `next_page_token` field of the response from 1212the previous call. 1213 x__xgafv: string, V1 error format. 1214 Allowed values 1215 1 - v1 error format 1216 2 - v2 error format 1217 pageSize: integer, Optional. The number of models to retrieve per "page" of results. If there 1218are more remaining results than this number, the response message will 1219contain a valid value in the `next_page_token` field. 1220 1221The default value is 20, and the maximum page size is 100. 1222 filter: string, Optional. Specifies the subset of models to retrieve. 1223 1224Returns: 1225 An object of the form: 1226 1227 { # Response message for the ListModels method. 1228 "nextPageToken": "A String", # Optional. Pass this token as the `page_token` field of the request for a 1229 # subsequent call. 1230 "models": [ # The list of models. 1231 { # Represents a machine learning solution. 
1232 # 1233 # A model can have multiple versions, each of which is a deployed, trained 1234 # model ready to receive prediction requests. The model itself is just a 1235 # container. 1236 "description": "A String", # Optional. The description specified for the model when it was created. 1237 "onlinePredictionConsoleLogging": True or False, # Optional. If true, online prediction nodes send `stderr` and `stdout` 1238 # streams to Stackdriver Logging. These can be more verbose than the standard 1239 # access logs (see `onlinePredictionLogging`) and can incur higher cost. 1240 # However, they are helpful for debugging. Note that 1241 # [Stackdriver logs may incur a cost](/stackdriver/pricing), especially if 1242 # your project receives prediction requests at a high QPS. Estimate your 1243 # costs before enabling this option. 1244 # 1245 # Default is false. 1246 "labels": { # Optional. One or more labels that you can add, to organize your models. 1247 # Each label is a key-value pair, where both the key and the value are 1248 # arbitrary strings that you supply. 1249 # For more information, see the documentation on 1250 # <a href="/ml-engine/docs/tensorflow/resource-labels">using labels</a>. 1251 "a_key": "A String", 1252 }, 1253 "regions": [ # Optional. The list of regions where the model is going to be deployed. 1254 # Currently only one region per model is supported. 1255 # Defaults to 'us-central1' if nothing is set. 1256 # See the <a href="/ml-engine/docs/tensorflow/regions">available regions</a> 1257 # for AI Platform services. 1258 # Note: 1259 # * No matter where a model is deployed, it can always be accessed by 1260 # users from anywhere, both for online and batch prediction. 1261 # * The region for a batch prediction job is set by the region field when 1262 # submitting the batch prediction job and does not take its value from 1263 # this field. 1264 "A String", 1265 ], 1266 "etag": "A String", # `etag` is used for optimistic concurrency control as a way to help 1267 # prevent simultaneous updates of a model from overwriting each other. 1268 # It is strongly suggested that systems make use of the `etag` in the 1269 # read-modify-write cycle to perform model updates in order to avoid race 1270 # conditions: An `etag` is returned in the response to `GetModel`, and 1271 # systems are expected to put that etag in the request to `UpdateModel` to 1272 # ensure that their change will be applied to the model as intended. 1273 "defaultVersion": { # Represents a version of the model. # Output only. The default version of the model. This version will be used to 1274 # handle prediction requests that do not specify a version. 1275 # 1276 # You can change the default version by calling 1277 # [projects.methods.versions.setDefault](/ml-engine/reference/rest/v1/projects.models.versions/setDefault). 1278 # 1279 # Each version is a trained model deployed in the cloud, ready to handle 1280 # prediction requests. A model can have multiple versions. You can get 1281 # information about all of the versions of a given model by calling 1282 # [projects.models.versions.list](/ml-engine/reference/rest/v1/projects.models.versions/list). 1283 "errorMessage": "A String", # Output only. The details of a failure or a cancellation. 1284 "labels": { # Optional. One or more labels that you can add, to organize your model 1285 # versions. Each label is a key-value pair, where both the key and the value 1286 # are arbitrary strings that you supply. 
1287 # For more information, see the documentation on 1288 # <a href="/ml-engine/docs/tensorflow/resource-labels">using labels</a>. 1289 "a_key": "A String", 1290 }, 1291 "machineType": "A String", # Optional. The type of machine on which to serve the model. Currently only 1292 # applies to online prediction service. 1293 # <dl> 1294 # <dt>mls1-c1-m2</dt> 1295 # <dd> 1296 # The <b>default</b> machine type, with 1 core and 2 GB RAM. The deprecated 1297 # name for this machine type is "mls1-highmem-1". 1298 # </dd> 1299 # <dt>mls1-c4-m2</dt> 1300 # <dd> 1301 # In <b>Beta</b>. This machine type has 4 cores and 2 GB RAM. The 1302 # deprecated name for this machine type is "mls1-highcpu-4". 1303 # </dd> 1304 # </dl> 1305 "description": "A String", # Optional. The description specified for the version when it was created. 1306 "runtimeVersion": "A String", # Optional. The AI Platform runtime version to use for this deployment. 1307 # If not set, AI Platform uses the default stable version, 1.0. For more 1308 # information, see the 1309 # [runtime version list](/ml-engine/docs/runtime-version-list) and 1310 # [how to manage runtime versions](/ml-engine/docs/versioning). 1311 "manualScaling": { # Options for manually scaling a model. # Manually select the number of nodes to use for serving the 1312 # model. You should generally use `auto_scaling` with an appropriate 1313 # `min_nodes` instead, but this option is available if you want more 1314 # predictable billing. Beware that latency and error rates will increase 1315 # if the traffic exceeds that capability of the system to serve it based 1316 # on the selected number of nodes. 1317 "nodes": 42, # The number of nodes to allocate for this model. These nodes are always up, 1318 # starting from the time the model is deployed, so the cost of operating 1319 # this model will be proportional to `nodes` * number of hours since 1320 # last billing cycle plus the cost for each prediction performed. 1321 }, 1322 "predictionClass": "A String", # Optional. The fully qualified name 1323 # (<var>module_name</var>.<var>class_name</var>) of a class that implements 1324 # the Predictor interface described in this reference field. The module 1325 # containing this class should be included in a package provided to the 1326 # [`packageUris` field](#Version.FIELDS.package_uris). 1327 # 1328 # Specify this field if and only if you are deploying a [custom prediction 1329 # routine (beta)](/ml-engine/docs/tensorflow/custom-prediction-routines). 1330 # If you specify this field, you must set 1331 # [`runtimeVersion`](#Version.FIELDS.runtime_version) to 1.4 or greater. 1332 # 1333 # The following code sample provides the Predictor interface: 1334 # 1335 # ```py 1336 # class Predictor(object): 1337 # """Interface for constructing custom predictors.""" 1338 # 1339 # def predict(self, instances, **kwargs): 1340 # """Performs custom prediction. 1341 # 1342 # Instances are the decoded values from the request. They have already 1343 # been deserialized from JSON. 1344 # 1345 # Args: 1346 # instances: A list of prediction input instances. 1347 # **kwargs: A dictionary of keyword args provided as additional 1348 # fields on the predict request body. 1349 # 1350 # Returns: 1351 # A list of outputs containing the prediction results. This list must 1352 # be JSON serializable. 1353 # """ 1354 # raise NotImplementedError() 1355 # 1356 # @classmethod 1357 # def from_path(cls, model_dir): 1358 # """Creates an instance of Predictor using the given path. 
1359 # 1360 # Loading of the predictor should be done in this method. 1361 # 1362 # Args: 1363 # model_dir: The local directory that contains the exported model 1364 # file along with any additional files uploaded when creating the 1365 # version resource. 1366 # 1367 # Returns: 1368 # An instance implementing this Predictor class. 1369 # """ 1370 # raise NotImplementedError() 1371 # ``` 1372 # 1373 # Learn more about [the Predictor interface and custom prediction 1374 # routines](/ml-engine/docs/tensorflow/custom-prediction-routines). 1375 "autoScaling": { # Options for automatically scaling a model. # Automatically scale the number of nodes used to serve the model in 1376 # response to increases and decreases in traffic. Care should be 1377 # taken to ramp up traffic according to the model's ability to scale 1378 # or you will start seeing increases in latency and 429 response codes. 1379 "minNodes": 42, # Optional. The minimum number of nodes to allocate for this model. These 1380 # nodes are always up, starting from the time the model is deployed. 1381 # Therefore, the cost of operating this model will be at least 1382 # `rate` * `min_nodes` * number of hours since last billing cycle, 1383 # where `rate` is the cost per node-hour as documented in the 1384 # [pricing guide](/ml-engine/docs/pricing), 1385 # even if no predictions are performed. There is additional cost for each 1386 # prediction performed. 1387 # 1388 # Unlike manual scaling, if the load gets too heavy for the nodes 1389 # that are up, the service will automatically add nodes to handle the 1390 # increased load as well as scale back as traffic drops, always maintaining 1391 # at least `min_nodes`. You will be charged for the time in which additional 1392 # nodes are used. 1393 # 1394 # If not specified, `min_nodes` defaults to 0, in which case, when traffic 1395 # to a model stops (and after a cool-down period), nodes will be shut down 1396 # and no charges will be incurred until traffic to the model resumes. 1397 # 1398 # You can set `min_nodes` when creating the model version, and you can also 1399 # update `min_nodes` for an existing version: 1400 # <pre> 1401 # update_body.json: 1402 # { 1403 # 'autoScaling': { 1404 # 'minNodes': 5 1405 # } 1406 # } 1407 # </pre> 1408 # HTTP request: 1409 # <pre> 1410 # PATCH 1411 # https://ml.googleapis.com/v1/{name=projects/*/models/*/versions/*}?update_mask=autoScaling.minNodes 1412 # -d @./update_body.json 1413 # </pre> 1414 }, 1415 "serviceAccount": "A String", # Optional. Specifies the service account for resource access control. 1416 "state": "A String", # Output only. The state of a version. 1417 "pythonVersion": "A String", # Optional. The version of Python used in prediction. If not set, the default 1418 # version is '2.7'. Python '3.5' is available when `runtime_version` is set 1419 # to '1.4' and above. Python '2.7' works with all supported runtime versions. 1420 "framework": "A String", # Optional. The machine learning framework AI Platform uses to train 1421 # this version of the model. Valid values are `TENSORFLOW`, `SCIKIT_LEARN`, 1422 # `XGBOOST`. If you do not specify a framework, AI Platform 1423 # will analyze files in the deployment_uri to determine a framework. If you 1424 # choose `SCIKIT_LEARN` or `XGBOOST`, you must also set the runtime version 1425 # of the model to 1.4 or greater. 1426 # 1427 # Do **not** specify a framework if you're deploying a [custom 1428 # prediction routine](/ml-engine/docs/tensorflow/custom-prediction-routines). 
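          #
          # As an illustration only (not part of the response schema), here is a hedged
          # sketch of a Version body that satisfies the framework rules above when it is
          # passed to projects.models.versions.create; the bucket path and version name
          # are hypothetical:
          #
          # ```py
          # version_body = {
          #     'name': 'v1',                                   # hypothetical version name
          #     'deploymentUri': 'gs://your-bucket/model-dir',  # hypothetical model location
          #     'framework': 'SCIKIT_LEARN',
          #     'runtimeVersion': '1.4',  # 1.4 or greater is required for SCIKIT_LEARN/XGBOOST
          #     'pythonVersion': '2.7',
          # }
          # ```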
1429       "packageUris": [ # Optional. Cloud Storage paths (`gs://…`) of packages for [custom 1430           # prediction routines](/ml-engine/docs/tensorflow/custom-prediction-routines) 1431           # or [scikit-learn pipelines with custom 1432           # code](/ml-engine/docs/scikit/exporting-for-prediction#custom-pipeline-code). 1433           # 1434           # For a custom prediction routine, one of these packages must contain your 1435           # Predictor class (see 1436           # [`predictionClass`](#Version.FIELDS.prediction_class)). Additionally, 1437           # include any dependencies that your Predictor or scikit-learn pipeline 1438           # uses and that are not already included in your selected [runtime 1439           # version](/ml-engine/docs/tensorflow/runtime-version-list). 1440           # 1441           # If you specify this field, you must also set 1442           # [`runtimeVersion`](#Version.FIELDS.runtime_version) to 1.4 or greater. 1443         "A String", 1444       ], 1445       "etag": "A String", # `etag` is used for optimistic concurrency control as a way to help 1446           # prevent simultaneous updates of a model from overwriting each other. 1447           # It is strongly suggested that systems make use of the `etag` in the 1448           # read-modify-write cycle to perform model updates in order to avoid race 1449           # conditions: An `etag` is returned in the response to `GetVersion`, and 1450           # systems are expected to put that etag in the request to `UpdateVersion` to 1451           # ensure that their change will be applied to the version as intended. 1452       "lastUseTime": "A String", # Output only. The time the version was last used for prediction. 1453       "deploymentUri": "A String", # Required. The Cloud Storage location of the trained model used to 1454           # create the version. See the 1455           # [guide to model 1456           # deployment](/ml-engine/docs/tensorflow/deploying-models) for more 1457           # information. 1458           # 1459           # When passing Version to 1460           # [projects.models.versions.create](/ml-engine/reference/rest/v1/projects.models.versions/create) 1461           # the model service uses the specified location as the source of the model. 1462           # Once deployed, the model version is hosted by the prediction service, so 1463           # this location is useful only as a historical record. 1464           # The total number of model files can't exceed 1000. 1465       "createTime": "A String", # Output only. The time the version was created. 1466       "isDefault": True or False, # Output only. If true, this version will be used to handle prediction 1467           # requests that do not specify a version. 1468           # 1469           # You can change the default version by calling 1470           # [projects.models.versions.setDefault](/ml-engine/reference/rest/v1/projects.models.versions/setDefault). 1471       "name": "A String", # Required. The name specified for the version when it was created. 1472           # 1473           # The version name must be unique within the model it is created in. 1474     }, 1475     "onlinePredictionLogging": True or False, # Optional. If true, online prediction access logs are sent to Stackdriver 1476         # Logging. These logs are like standard server access logs, containing 1477         # information like timestamp and latency for each request. Note that 1478         # [Stackdriver logs may incur a cost](/stackdriver/pricing), especially if 1479         # your project receives prediction requests at a high queries per second rate 1480         # (QPS). Estimate your costs before enabling this option. 1481         # 1482         # Default is false. 1483     "name": "A String", # Required. The name specified for the model when it was created. 1484         # 1485         # The model name must be unique within the project it is created in.
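    #
    # As a hedged illustration (not part of the response schema): a minimal sketch of
    # paging through every model in a project with `list` and `list_next`, assuming
    # `ml` is an authorized client built with googleapiclient.discovery.build('ml', 'v1')
    # and the project ID is hypothetical:
    #
    # ```py
    # request = ml.projects().models().list(parent='projects/my-project', pageSize=50)
    # while request is not None:
    #     response = request.execute()
    #     for model in response.get('models', []):
    #         print(model['name'])
    #     request = ml.projects().models().list_next(request, response)
    # ```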
1486 }, 1487 ], 1488 }</pre> 1489</div> 1490 1491<div class="method"> 1492 <code class="details" id="list_next">list_next(previous_request, previous_response)</code> 1493 <pre>Retrieves the next page of results. 1494 1495Args: 1496 previous_request: The request for the previous page. (required) 1497 previous_response: The response from the request for the previous page. (required) 1498 1499Returns: 1500 A request object that you can call 'execute()' on to request the next 1501 page. Returns None if there are no more items in the collection. 1502 </pre> 1503</div> 1504 1505<div class="method"> 1506 <code class="details" id="patch">patch(name, body, updateMask=None, x__xgafv=None)</code> 1507 <pre>Updates a specific model resource. 1508 1509Currently the only supported fields to update are `description` and 1510`default_version.name`. 1511 1512Args: 1513 name: string, Required. The project name. (required) 1514 body: object, The request body. (required) 1515 The object takes the form of: 1516 1517{ # Represents a machine learning solution. 1518 # 1519 # A model can have multiple versions, each of which is a deployed, trained 1520 # model ready to receive prediction requests. The model itself is just a 1521 # container. 1522 "description": "A String", # Optional. The description specified for the model when it was created. 1523 "onlinePredictionConsoleLogging": True or False, # Optional. If true, online prediction nodes send `stderr` and `stdout` 1524 # streams to Stackdriver Logging. These can be more verbose than the standard 1525 # access logs (see `onlinePredictionLogging`) and can incur higher cost. 1526 # However, they are helpful for debugging. Note that 1527 # [Stackdriver logs may incur a cost](/stackdriver/pricing), especially if 1528 # your project receives prediction requests at a high QPS. Estimate your 1529 # costs before enabling this option. 1530 # 1531 # Default is false. 1532 "labels": { # Optional. One or more labels that you can add, to organize your models. 1533 # Each label is a key-value pair, where both the key and the value are 1534 # arbitrary strings that you supply. 1535 # For more information, see the documentation on 1536 # <a href="/ml-engine/docs/tensorflow/resource-labels">using labels</a>. 1537 "a_key": "A String", 1538 }, 1539 "regions": [ # Optional. The list of regions where the model is going to be deployed. 1540 # Currently only one region per model is supported. 1541 # Defaults to 'us-central1' if nothing is set. 1542 # See the <a href="/ml-engine/docs/tensorflow/regions">available regions</a> 1543 # for AI Platform services. 1544 # Note: 1545 # * No matter where a model is deployed, it can always be accessed by 1546 # users from anywhere, both for online and batch prediction. 1547 # * The region for a batch prediction job is set by the region field when 1548 # submitting the batch prediction job and does not take its value from 1549 # this field. 1550 "A String", 1551 ], 1552 "etag": "A String", # `etag` is used for optimistic concurrency control as a way to help 1553 # prevent simultaneous updates of a model from overwriting each other. 1554 # It is strongly suggested that systems make use of the `etag` in the 1555 # read-modify-write cycle to perform model updates in order to avoid race 1556 # conditions: An `etag` is returned in the response to `GetModel`, and 1557 # systems are expected to put that etag in the request to `UpdateModel` to 1558 # ensure that their change will be applied to the model as intended. 
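        #
        # As a hedged sketch of the read-modify-write cycle described above, assuming
        # `ml` is an authorized client built with googleapiclient.discovery.build('ml', 'v1')
        # and the project and model names are hypothetical:
        #
        # ```py
        # name = 'projects/my-project/models/my_model'
        # model = ml.projects().models().get(name=name).execute()
        # body = {'description': 'Updated description', 'etag': model['etag']}
        # operation = ml.projects().models().patch(
        #     name=name, body=body, updateMask='description').execute()
        # # patch returns a long-running Operation; poll it until `done` is true.
        # ```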
1559 "defaultVersion": { # Represents a version of the model. # Output only. The default version of the model. This version will be used to 1560 # handle prediction requests that do not specify a version. 1561 # 1562 # You can change the default version by calling 1563 # [projects.methods.versions.setDefault](/ml-engine/reference/rest/v1/projects.models.versions/setDefault). 1564 # 1565 # Each version is a trained model deployed in the cloud, ready to handle 1566 # prediction requests. A model can have multiple versions. You can get 1567 # information about all of the versions of a given model by calling 1568 # [projects.models.versions.list](/ml-engine/reference/rest/v1/projects.models.versions/list). 1569 "errorMessage": "A String", # Output only. The details of a failure or a cancellation. 1570 "labels": { # Optional. One or more labels that you can add, to organize your model 1571 # versions. Each label is a key-value pair, where both the key and the value 1572 # are arbitrary strings that you supply. 1573 # For more information, see the documentation on 1574 # <a href="/ml-engine/docs/tensorflow/resource-labels">using labels</a>. 1575 "a_key": "A String", 1576 }, 1577 "machineType": "A String", # Optional. The type of machine on which to serve the model. Currently only 1578 # applies to online prediction service. 1579 # <dl> 1580 # <dt>mls1-c1-m2</dt> 1581 # <dd> 1582 # The <b>default</b> machine type, with 1 core and 2 GB RAM. The deprecated 1583 # name for this machine type is "mls1-highmem-1". 1584 # </dd> 1585 # <dt>mls1-c4-m2</dt> 1586 # <dd> 1587 # In <b>Beta</b>. This machine type has 4 cores and 2 GB RAM. The 1588 # deprecated name for this machine type is "mls1-highcpu-4". 1589 # </dd> 1590 # </dl> 1591 "description": "A String", # Optional. The description specified for the version when it was created. 1592 "runtimeVersion": "A String", # Optional. The AI Platform runtime version to use for this deployment. 1593 # If not set, AI Platform uses the default stable version, 1.0. For more 1594 # information, see the 1595 # [runtime version list](/ml-engine/docs/runtime-version-list) and 1596 # [how to manage runtime versions](/ml-engine/docs/versioning). 1597 "manualScaling": { # Options for manually scaling a model. # Manually select the number of nodes to use for serving the 1598 # model. You should generally use `auto_scaling` with an appropriate 1599 # `min_nodes` instead, but this option is available if you want more 1600 # predictable billing. Beware that latency and error rates will increase 1601 # if the traffic exceeds that capability of the system to serve it based 1602 # on the selected number of nodes. 1603 "nodes": 42, # The number of nodes to allocate for this model. These nodes are always up, 1604 # starting from the time the model is deployed, so the cost of operating 1605 # this model will be proportional to `nodes` * number of hours since 1606 # last billing cycle plus the cost for each prediction performed. 1607 }, 1608 "predictionClass": "A String", # Optional. The fully qualified name 1609 # (<var>module_name</var>.<var>class_name</var>) of a class that implements 1610 # the Predictor interface described in this reference field. The module 1611 # containing this class should be included in a package provided to the 1612 # [`packageUris` field](#Version.FIELDS.package_uris). 1613 # 1614 # Specify this field if and only if you are deploying a [custom prediction 1615 # routine (beta)](/ml-engine/docs/tensorflow/custom-prediction-routines). 
1616 # If you specify this field, you must set 1617 # [`runtimeVersion`](#Version.FIELDS.runtime_version) to 1.4 or greater. 1618 # 1619 # The following code sample provides the Predictor interface: 1620 # 1621 # ```py 1622 # class Predictor(object): 1623 # """Interface for constructing custom predictors.""" 1624 # 1625 # def predict(self, instances, **kwargs): 1626 # """Performs custom prediction. 1627 # 1628 # Instances are the decoded values from the request. They have already 1629 # been deserialized from JSON. 1630 # 1631 # Args: 1632 # instances: A list of prediction input instances. 1633 # **kwargs: A dictionary of keyword args provided as additional 1634 # fields on the predict request body. 1635 # 1636 # Returns: 1637 # A list of outputs containing the prediction results. This list must 1638 # be JSON serializable. 1639 # """ 1640 # raise NotImplementedError() 1641 # 1642 # @classmethod 1643 # def from_path(cls, model_dir): 1644 # """Creates an instance of Predictor using the given path. 1645 # 1646 # Loading of the predictor should be done in this method. 1647 # 1648 # Args: 1649 # model_dir: The local directory that contains the exported model 1650 # file along with any additional files uploaded when creating the 1651 # version resource. 1652 # 1653 # Returns: 1654 # An instance implementing this Predictor class. 1655 # """ 1656 # raise NotImplementedError() 1657 # ``` 1658 # 1659 # Learn more about [the Predictor interface and custom prediction 1660 # routines](/ml-engine/docs/tensorflow/custom-prediction-routines). 1661 "autoScaling": { # Options for automatically scaling a model. # Automatically scale the number of nodes used to serve the model in 1662 # response to increases and decreases in traffic. Care should be 1663 # taken to ramp up traffic according to the model's ability to scale 1664 # or you will start seeing increases in latency and 429 response codes. 1665 "minNodes": 42, # Optional. The minimum number of nodes to allocate for this model. These 1666 # nodes are always up, starting from the time the model is deployed. 1667 # Therefore, the cost of operating this model will be at least 1668 # `rate` * `min_nodes` * number of hours since last billing cycle, 1669 # where `rate` is the cost per node-hour as documented in the 1670 # [pricing guide](/ml-engine/docs/pricing), 1671 # even if no predictions are performed. There is additional cost for each 1672 # prediction performed. 1673 # 1674 # Unlike manual scaling, if the load gets too heavy for the nodes 1675 # that are up, the service will automatically add nodes to handle the 1676 # increased load as well as scale back as traffic drops, always maintaining 1677 # at least `min_nodes`. You will be charged for the time in which additional 1678 # nodes are used. 1679 # 1680 # If not specified, `min_nodes` defaults to 0, in which case, when traffic 1681 # to a model stops (and after a cool-down period), nodes will be shut down 1682 # and no charges will be incurred until traffic to the model resumes. 1683 # 1684 # You can set `min_nodes` when creating the model version, and you can also 1685 # update `min_nodes` for an existing version: 1686 # <pre> 1687 # update_body.json: 1688 # { 1689 # 'autoScaling': { 1690 # 'minNodes': 5 1691 # } 1692 # } 1693 # </pre> 1694 # HTTP request: 1695 # <pre> 1696 # PATCH 1697 # https://ml.googleapis.com/v1/{name=projects/*/models/*/versions/*}?update_mask=autoScaling.minNodes 1698 # -d @./update_body.json 1699 # </pre> 1700 }, 1701 "serviceAccount": "A String", # Optional. 
Specifies the service account for resource access control. 1702 "state": "A String", # Output only. The state of a version. 1703 "pythonVersion": "A String", # Optional. The version of Python used in prediction. If not set, the default 1704 # version is '2.7'. Python '3.5' is available when `runtime_version` is set 1705 # to '1.4' and above. Python '2.7' works with all supported runtime versions. 1706 "framework": "A String", # Optional. The machine learning framework AI Platform uses to train 1707 # this version of the model. Valid values are `TENSORFLOW`, `SCIKIT_LEARN`, 1708 # `XGBOOST`. If you do not specify a framework, AI Platform 1709 # will analyze files in the deployment_uri to determine a framework. If you 1710 # choose `SCIKIT_LEARN` or `XGBOOST`, you must also set the runtime version 1711 # of the model to 1.4 or greater. 1712 # 1713 # Do **not** specify a framework if you're deploying a [custom 1714 # prediction routine](/ml-engine/docs/tensorflow/custom-prediction-routines). 1715 "packageUris": [ # Optional. Cloud Storage paths (`gs://…`) of packages for [custom 1716 # prediction routines](/ml-engine/docs/tensorflow/custom-prediction-routines) 1717 # or [scikit-learn pipelines with custom 1718 # code](/ml-engine/docs/scikit/exporting-for-prediction#custom-pipeline-code). 1719 # 1720 # For a custom prediction routine, one of these packages must contain your 1721 # Predictor class (see 1722 # [`predictionClass`](#Version.FIELDS.prediction_class)). Additionally, 1723 # include any dependencies used by your Predictor or scikit-learn pipeline 1724 # uses that are not already included in your selected [runtime 1725 # version](/ml-engine/docs/tensorflow/runtime-version-list). 1726 # 1727 # If you specify this field, you must also set 1728 # [`runtimeVersion`](#Version.FIELDS.runtime_version) to 1.4 or greater. 1729 "A String", 1730 ], 1731 "etag": "A String", # `etag` is used for optimistic concurrency control as a way to help 1732 # prevent simultaneous updates of a model from overwriting each other. 1733 # It is strongly suggested that systems make use of the `etag` in the 1734 # read-modify-write cycle to perform model updates in order to avoid race 1735 # conditions: An `etag` is returned in the response to `GetVersion`, and 1736 # systems are expected to put that etag in the request to `UpdateVersion` to 1737 # ensure that their change will be applied to the model as intended. 1738 "lastUseTime": "A String", # Output only. The time the version was last used for prediction. 1739 "deploymentUri": "A String", # Required. The Cloud Storage location of the trained model used to 1740 # create the version. See the 1741 # [guide to model 1742 # deployment](/ml-engine/docs/tensorflow/deploying-models) for more 1743 # information. 1744 # 1745 # When passing Version to 1746 # [projects.models.versions.create](/ml-engine/reference/rest/v1/projects.models.versions/create) 1747 # the model service uses the specified location as the source of the model. 1748 # Once deployed, the model version is hosted by the prediction service, so 1749 # this location is useful only as a historical record. 1750 # The total number of model files can't exceed 1000. 1751 "createTime": "A String", # Output only. The time the version was created. 1752 "isDefault": True or False, # Output only. If true, this version will be used to handle prediction 1753 # requests that do not specify a version. 
1754           # 1755           # You can change the default version by calling 1756           # [projects.models.versions.setDefault](/ml-engine/reference/rest/v1/projects.models.versions/setDefault). 1757       "name": "A String", # Required. The name specified for the version when it was created. 1758           # 1759           # The version name must be unique within the model it is created in. 1760     }, 1761     "onlinePredictionLogging": True or False, # Optional. If true, online prediction access logs are sent to Stackdriver 1762         # Logging. These logs are like standard server access logs, containing 1763         # information like timestamp and latency for each request. Note that 1764         # [Stackdriver logs may incur a cost](/stackdriver/pricing), especially if 1765         # your project receives prediction requests at a high queries per second rate 1766         # (QPS). Estimate your costs before enabling this option. 1767         # 1768         # Default is false. 1769     "name": "A String", # Required. The name specified for the model when it was created. 1770         # 1771         # The model name must be unique within the project it is created in. 1772 } 1773 1774   updateMask: string, Required. Specifies the path, relative to `Model`, of the field to update. 1775 1776 For example, to change the description of a model to "foo" and set its 1777 default version to "version_1", the `update_mask` parameter would be 1778 specified as `description,default_version.name`, and the `PATCH` 1779 request body would specify the new value, as follows: 1780     { 1781       "description": "foo", 1782       "defaultVersion": { 1783         "name":"version_1" 1784       } 1785     } 1786 1787 Currently the supported update masks are `description` and 1788 `default_version.name`. 1789   x__xgafv: string, V1 error format. 1790     Allowed values 1791       1 - v1 error format 1792       2 - v2 error format 1793 1794 Returns: 1795   An object of the form: 1796 1797     { # This resource represents a long-running operation that is the result of a 1798       # network API call. 1799     "metadata": { # Service-specific metadata associated with the operation. It typically 1800         # contains progress information and common metadata such as create time. 1801         # Some services might not provide such metadata. Any method that returns a 1802         # long-running operation should document the metadata type, if any. 1803       "a_key": "", # Properties of the object. Contains field @type with type URL. 1804     }, 1805     "error": { # The `Status` type defines a logical error model that is suitable for # The error result of the operation in case of failure or cancellation. 1806         # different programming environments, including REST APIs and RPC APIs. It is 1807         # used by [gRPC](https://github.com/grpc). Each `Status` message contains 1808         # three pieces of data: error code, error message, and error details. 1809         # 1810         # You can find out more about this error model and how to work with it in the 1811         # [API Design Guide](https://cloud.google.com/apis/design/errors). 1812       "message": "A String", # A developer-facing error message, which should be in English. Any 1813           # user-facing error message should be localized and sent in the 1814           # google.rpc.Status.details field, or localized by the client. 1815       "code": 42, # The status code, which should be an enum value of google.rpc.Code. 1816       "details": [ # A list of messages that carry the error details. There is a common set of 1817           # message types for APIs to use. 1818         { 1819           "a_key": "", # Properties of the object. Contains field @type with type URL. 1820         }, 1821       ], 1822     }, 1823     "done": True or False, # If the value is `false`, it means the operation is still in progress.
1824 # If `true`, the operation is completed, and either `error` or `response` is 1825 # available. 1826 "response": { # The normal response of the operation in case of success. If the original 1827 # method returns no data on success, such as `Delete`, the response is 1828 # `google.protobuf.Empty`. If the original method is standard 1829 # `Get`/`Create`/`Update`, the response should be the resource. For other 1830 # methods, the response should have the type `XxxResponse`, where `Xxx` 1831 # is the original method name. For example, if the original method name 1832 # is `TakeSnapshot()`, the inferred response type is 1833 # `TakeSnapshotResponse`. 1834 "a_key": "", # Properties of the object. Contains field @type with type URL. 1835 }, 1836 "name": "A String", # The server-assigned name, which is only unique within the same service that 1837 # originally returns it. If you use the default HTTP mapping, the 1838 # `name` should be a resource name ending with `operations/{unique_id}`. 1839 }</pre> 1840</div> 1841 1842<div class="method"> 1843 <code class="details" id="setIamPolicy">setIamPolicy(resource, body, x__xgafv=None)</code> 1844 <pre>Sets the access control policy on the specified resource. Replaces any 1845existing policy. 1846 1847Args: 1848 resource: string, REQUIRED: The resource for which the policy is being specified. 1849See the operation documentation for the appropriate value for this field. (required) 1850 body: object, The request body. (required) 1851 The object takes the form of: 1852 1853{ # Request message for `SetIamPolicy` method. 1854 "policy": { # Defines an Identity and Access Management (IAM) policy. It is used to # REQUIRED: The complete policy to be applied to the `resource`. The size of 1855 # the policy is limited to a few 10s of KB. An empty policy is a 1856 # valid policy but certain Cloud Platform services (such as Projects) 1857 # might reject them. 1858 # specify access control policies for Cloud Platform resources. 1859 # 1860 # 1861 # A `Policy` consists of a list of `bindings`. A `binding` binds a list of 1862 # `members` to a `role`, where the members can be user accounts, Google groups, 1863 # Google domains, and service accounts. A `role` is a named list of permissions 1864 # defined by IAM. 1865 # 1866 # **JSON Example** 1867 # 1868 # { 1869 # "bindings": [ 1870 # { 1871 # "role": "roles/owner", 1872 # "members": [ 1873 # "user:mike@example.com", 1874 # "group:admins@example.com", 1875 # "domain:google.com", 1876 # "serviceAccount:my-other-app@appspot.gserviceaccount.com" 1877 # ] 1878 # }, 1879 # { 1880 # "role": "roles/viewer", 1881 # "members": ["user:sean@example.com"] 1882 # } 1883 # ] 1884 # } 1885 # 1886 # **YAML Example** 1887 # 1888 # bindings: 1889 # - members: 1890 # - user:mike@example.com 1891 # - group:admins@example.com 1892 # - domain:google.com 1893 # - serviceAccount:my-other-app@appspot.gserviceaccount.com 1894 # role: roles/owner 1895 # - members: 1896 # - user:sean@example.com 1897 # role: roles/viewer 1898 # 1899 # 1900 # For a description of IAM and its features, see the 1901 # [IAM developer's guide](https://cloud.google.com/iam/docs). 1902 "bindings": [ # Associates a list of `members` to a `role`. 1903 # `bindings` with no members will result in an error. 1904 { # Associates `members` with a `role`. 1905 "role": "A String", # Role that is assigned to `members`. 1906 # For example, `roles/viewer`, `roles/editor`, or `roles/owner`. 
1907 "members": [ # Specifies the identities requesting access for a Cloud Platform resource. 1908 # `members` can have the following values: 1909 # 1910 # * `allUsers`: A special identifier that represents anyone who is 1911 # on the internet; with or without a Google account. 1912 # 1913 # * `allAuthenticatedUsers`: A special identifier that represents anyone 1914 # who is authenticated with a Google account or a service account. 1915 # 1916 # * `user:{emailid}`: An email address that represents a specific Google 1917 # account. For example, `alice@gmail.com` . 1918 # 1919 # 1920 # * `serviceAccount:{emailid}`: An email address that represents a service 1921 # account. For example, `my-other-app@appspot.gserviceaccount.com`. 1922 # 1923 # * `group:{emailid}`: An email address that represents a Google group. 1924 # For example, `admins@example.com`. 1925 # 1926 # 1927 # * `domain:{domain}`: The G Suite domain (primary) that represents all the 1928 # users of that domain. For example, `google.com` or `example.com`. 1929 # 1930 "A String", 1931 ], 1932 "condition": { # Represents an expression text. Example: # The condition that is associated with this binding. 1933 # NOTE: An unsatisfied condition will not allow user access via current 1934 # binding. Different bindings, including their conditions, are examined 1935 # independently. 1936 # 1937 # title: "User account presence" 1938 # description: "Determines whether the request has a user account" 1939 # expression: "size(request.user) > 0" 1940 "description": "A String", # An optional description of the expression. This is a longer text which 1941 # describes the expression, e.g. when hovered over it in a UI. 1942 "expression": "A String", # Textual representation of an expression in 1943 # Common Expression Language syntax. 1944 # 1945 # The application context of the containing message determines which 1946 # well-known feature set of CEL is supported. 1947 "location": "A String", # An optional string indicating the location of the expression for error 1948 # reporting, e.g. a file name and a position in the file. 1949 "title": "A String", # An optional title for the expression, i.e. a short string describing 1950 # its purpose. This can be used e.g. in UIs which allow to enter the 1951 # expression. 1952 }, 1953 }, 1954 ], 1955 "etag": "A String", # `etag` is used for optimistic concurrency control as a way to help 1956 # prevent simultaneous updates of a policy from overwriting each other. 1957 # It is strongly suggested that systems make use of the `etag` in the 1958 # read-modify-write cycle to perform policy updates in order to avoid race 1959 # conditions: An `etag` is returned in the response to `getIamPolicy`, and 1960 # systems are expected to put that etag in the request to `setIamPolicy` to 1961 # ensure that their change will be applied to the same version of the policy. 1962 # 1963 # If no `etag` is provided in the call to `setIamPolicy`, then the existing 1964 # policy is overwritten blindly. 1965 "version": 42, # Deprecated. 1966 "auditConfigs": [ # Specifies cloud audit logging configuration for this policy. 1967 { # Specifies the audit configuration for a service. 1968 # The configuration determines which permission types are logged, and what 1969 # identities, if any, are exempted from logging. 1970 # An AuditConfig must have one or more AuditLogConfigs. 
1971 # 1972 # If there are AuditConfigs for both `allServices` and a specific service, 1973 # the union of the two AuditConfigs is used for that service: the log_types 1974 # specified in each AuditConfig are enabled, and the exempted_members in each 1975 # AuditLogConfig are exempted. 1976 # 1977 # Example Policy with multiple AuditConfigs: 1978 # 1979 # { 1980 # "audit_configs": [ 1981 # { 1982 # "service": "allServices" 1983 # "audit_log_configs": [ 1984 # { 1985 # "log_type": "DATA_READ", 1986 # "exempted_members": [ 1987 # "user:foo@gmail.com" 1988 # ] 1989 # }, 1990 # { 1991 # "log_type": "DATA_WRITE", 1992 # }, 1993 # { 1994 # "log_type": "ADMIN_READ", 1995 # } 1996 # ] 1997 # }, 1998 # { 1999 # "service": "fooservice.googleapis.com" 2000 # "audit_log_configs": [ 2001 # { 2002 # "log_type": "DATA_READ", 2003 # }, 2004 # { 2005 # "log_type": "DATA_WRITE", 2006 # "exempted_members": [ 2007 # "user:bar@gmail.com" 2008 # ] 2009 # } 2010 # ] 2011 # } 2012 # ] 2013 # } 2014 # 2015 # For fooservice, this policy enables DATA_READ, DATA_WRITE and ADMIN_READ 2016 # logging. It also exempts foo@gmail.com from DATA_READ logging, and 2017 # bar@gmail.com from DATA_WRITE logging. 2018 "auditLogConfigs": [ # The configuration for logging of each type of permission. 2019 { # Provides the configuration for logging a type of permissions. 2020 # Example: 2021 # 2022 # { 2023 # "audit_log_configs": [ 2024 # { 2025 # "log_type": "DATA_READ", 2026 # "exempted_members": [ 2027 # "user:foo@gmail.com" 2028 # ] 2029 # }, 2030 # { 2031 # "log_type": "DATA_WRITE", 2032 # } 2033 # ] 2034 # } 2035 # 2036 # This enables 'DATA_READ' and 'DATA_WRITE' logging, while exempting 2037 # foo@gmail.com from DATA_READ logging. 2038 "exemptedMembers": [ # Specifies the identities that do not cause logging for this type of 2039 # permission. 2040 # Follows the same format of Binding.members. 2041 "A String", 2042 ], 2043 "logType": "A String", # The log type that this config enables. 2044 }, 2045 ], 2046 "service": "A String", # Specifies a service that will be enabled for audit logging. 2047 # For example, `storage.googleapis.com`, `cloudsql.googleapis.com`. 2048 # `allServices` is a special value that covers all services. 2049 }, 2050 ], 2051 }, 2052 "updateMask": "A String", # OPTIONAL: A FieldMask specifying which fields of the policy to modify. Only 2053 # the fields in the mask will be modified. If no mask is provided, the 2054 # following default mask is used: 2055 # paths: "bindings, etag" 2056 # This field is only used by Cloud IAM. 2057 } 2058 2059 x__xgafv: string, V1 error format. 2060 Allowed values 2061 1 - v1 error format 2062 2 - v2 error format 2063 2064Returns: 2065 An object of the form: 2066 2067 { # Defines an Identity and Access Management (IAM) policy. It is used to 2068 # specify access control policies for Cloud Platform resources. 2069 # 2070 # 2071 # A `Policy` consists of a list of `bindings`. A `binding` binds a list of 2072 # `members` to a `role`, where the members can be user accounts, Google groups, 2073 # Google domains, and service accounts. A `role` is a named list of permissions 2074 # defined by IAM. 
2075 # 2076 # **JSON Example** 2077 # 2078 # { 2079 # "bindings": [ 2080 # { 2081 # "role": "roles/owner", 2082 # "members": [ 2083 # "user:mike@example.com", 2084 # "group:admins@example.com", 2085 # "domain:google.com", 2086 # "serviceAccount:my-other-app@appspot.gserviceaccount.com" 2087 # ] 2088 # }, 2089 # { 2090 # "role": "roles/viewer", 2091 # "members": ["user:sean@example.com"] 2092 # } 2093 # ] 2094 # } 2095 # 2096 # **YAML Example** 2097 # 2098 # bindings: 2099 # - members: 2100 # - user:mike@example.com 2101 # - group:admins@example.com 2102 # - domain:google.com 2103 # - serviceAccount:my-other-app@appspot.gserviceaccount.com 2104 # role: roles/owner 2105 # - members: 2106 # - user:sean@example.com 2107 # role: roles/viewer 2108 # 2109 # 2110 # For a description of IAM and its features, see the 2111 # [IAM developer's guide](https://cloud.google.com/iam/docs). 2112 "bindings": [ # Associates a list of `members` to a `role`. 2113 # `bindings` with no members will result in an error. 2114 { # Associates `members` with a `role`. 2115 "role": "A String", # Role that is assigned to `members`. 2116 # For example, `roles/viewer`, `roles/editor`, or `roles/owner`. 2117 "members": [ # Specifies the identities requesting access for a Cloud Platform resource. 2118 # `members` can have the following values: 2119 # 2120 # * `allUsers`: A special identifier that represents anyone who is 2121 # on the internet; with or without a Google account. 2122 # 2123 # * `allAuthenticatedUsers`: A special identifier that represents anyone 2124 # who is authenticated with a Google account or a service account. 2125 # 2126 # * `user:{emailid}`: An email address that represents a specific Google 2127 # account. For example, `alice@gmail.com` . 2128 # 2129 # 2130 # * `serviceAccount:{emailid}`: An email address that represents a service 2131 # account. For example, `my-other-app@appspot.gserviceaccount.com`. 2132 # 2133 # * `group:{emailid}`: An email address that represents a Google group. 2134 # For example, `admins@example.com`. 2135 # 2136 # 2137 # * `domain:{domain}`: The G Suite domain (primary) that represents all the 2138 # users of that domain. For example, `google.com` or `example.com`. 2139 # 2140 "A String", 2141 ], 2142 "condition": { # Represents an expression text. Example: # The condition that is associated with this binding. 2143 # NOTE: An unsatisfied condition will not allow user access via current 2144 # binding. Different bindings, including their conditions, are examined 2145 # independently. 2146 # 2147 # title: "User account presence" 2148 # description: "Determines whether the request has a user account" 2149 # expression: "size(request.user) > 0" 2150 "description": "A String", # An optional description of the expression. This is a longer text which 2151 # describes the expression, e.g. when hovered over it in a UI. 2152 "expression": "A String", # Textual representation of an expression in 2153 # Common Expression Language syntax. 2154 # 2155 # The application context of the containing message determines which 2156 # well-known feature set of CEL is supported. 2157 "location": "A String", # An optional string indicating the location of the expression for error 2158 # reporting, e.g. a file name and a position in the file. 2159 "title": "A String", # An optional title for the expression, i.e. a short string describing 2160 # its purpose. This can be used e.g. in UIs which allow to enter the 2161 # expression. 
2162 }, 2163 }, 2164 ], 2165 "etag": "A String", # `etag` is used for optimistic concurrency control as a way to help 2166 # prevent simultaneous updates of a policy from overwriting each other. 2167 # It is strongly suggested that systems make use of the `etag` in the 2168 # read-modify-write cycle to perform policy updates in order to avoid race 2169 # conditions: An `etag` is returned in the response to `getIamPolicy`, and 2170 # systems are expected to put that etag in the request to `setIamPolicy` to 2171 # ensure that their change will be applied to the same version of the policy. 2172 # 2173 # If no `etag` is provided in the call to `setIamPolicy`, then the existing 2174 # policy is overwritten blindly. 2175 "version": 42, # Deprecated. 2176 "auditConfigs": [ # Specifies cloud audit logging configuration for this policy. 2177 { # Specifies the audit configuration for a service. 2178 # The configuration determines which permission types are logged, and what 2179 # identities, if any, are exempted from logging. 2180 # An AuditConfig must have one or more AuditLogConfigs. 2181 # 2182 # If there are AuditConfigs for both `allServices` and a specific service, 2183 # the union of the two AuditConfigs is used for that service: the log_types 2184 # specified in each AuditConfig are enabled, and the exempted_members in each 2185 # AuditLogConfig are exempted. 2186 # 2187 # Example Policy with multiple AuditConfigs: 2188 # 2189 # { 2190 # "audit_configs": [ 2191 # { 2192 # "service": "allServices" 2193 # "audit_log_configs": [ 2194 # { 2195 # "log_type": "DATA_READ", 2196 # "exempted_members": [ 2197 # "user:foo@gmail.com" 2198 # ] 2199 # }, 2200 # { 2201 # "log_type": "DATA_WRITE", 2202 # }, 2203 # { 2204 # "log_type": "ADMIN_READ", 2205 # } 2206 # ] 2207 # }, 2208 # { 2209 # "service": "fooservice.googleapis.com" 2210 # "audit_log_configs": [ 2211 # { 2212 # "log_type": "DATA_READ", 2213 # }, 2214 # { 2215 # "log_type": "DATA_WRITE", 2216 # "exempted_members": [ 2217 # "user:bar@gmail.com" 2218 # ] 2219 # } 2220 # ] 2221 # } 2222 # ] 2223 # } 2224 # 2225 # For fooservice, this policy enables DATA_READ, DATA_WRITE and ADMIN_READ 2226 # logging. It also exempts foo@gmail.com from DATA_READ logging, and 2227 # bar@gmail.com from DATA_WRITE logging. 2228 "auditLogConfigs": [ # The configuration for logging of each type of permission. 2229 { # Provides the configuration for logging a type of permissions. 2230 # Example: 2231 # 2232 # { 2233 # "audit_log_configs": [ 2234 # { 2235 # "log_type": "DATA_READ", 2236 # "exempted_members": [ 2237 # "user:foo@gmail.com" 2238 # ] 2239 # }, 2240 # { 2241 # "log_type": "DATA_WRITE", 2242 # } 2243 # ] 2244 # } 2245 # 2246 # This enables 'DATA_READ' and 'DATA_WRITE' logging, while exempting 2247 # foo@gmail.com from DATA_READ logging. 2248 "exemptedMembers": [ # Specifies the identities that do not cause logging for this type of 2249 # permission. 2250 # Follows the same format of Binding.members. 2251 "A String", 2252 ], 2253 "logType": "A String", # The log type that this config enables. 2254 }, 2255 ], 2256 "service": "A String", # Specifies a service that will be enabled for audit logging. 2257 # For example, `storage.googleapis.com`, `cloudsql.googleapis.com`. 2258 # `allServices` is a special value that covers all services. 
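        #
        # As a hedged sketch (not part of the response schema) of the read-modify-write
        # pattern with `getIamPolicy` and `setIamPolicy`, assuming `ml` is an authorized
        # client built with googleapiclient.discovery.build('ml', 'v1') and the resource
        # name, role, and member below are hypothetical:
        #
        # ```py
        # resource = 'projects/my-project/models/my_model'
        # policy = ml.projects().models().getIamPolicy(resource=resource).execute()
        # policy.setdefault('bindings', []).append({
        #     'role': 'roles/ml.modelUser',
        #     'members': ['user:alice@example.com'],
        # })
        # policy = ml.projects().models().setIamPolicy(
        #     resource=resource, body={'policy': policy}).execute()
        # # Keeping the returned `etag` inside `policy` lets the service detect
        # # conflicting concurrent updates, as described above. To check which
        # # permissions the caller holds, `testIamPermissions(resource=resource,
        # # body={'permissions': [...]})` can be used.
        # ```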
2259 }, 2260 ], 2261 }</pre> 2262</div> 2263 2264<div class="method"> 2265 <code class="details" id="testIamPermissions">testIamPermissions(resource, body, x__xgafv=None)</code> 2266 <pre>Returns permissions that a caller has on the specified resource. 2267If the resource does not exist, this will return an empty set of 2268permissions, not a NOT_FOUND error. 2269 2270Note: This operation is designed to be used for building permission-aware 2271UIs and command-line tools, not for authorization checking. This operation 2272may "fail open" without warning. 2273 2274Args: 2275 resource: string, REQUIRED: The resource for which the policy detail is being requested. 2276See the operation documentation for the appropriate value for this field. (required) 2277 body: object, The request body. (required) 2278 The object takes the form of: 2279 2280{ # Request message for `TestIamPermissions` method. 2281 "permissions": [ # The set of permissions to check for the `resource`. Permissions with 2282 # wildcards (such as '*' or 'storage.*') are not allowed. For more 2283 # information see 2284 # [IAM Overview](https://cloud.google.com/iam/docs/overview#permissions). 2285 "A String", 2286 ], 2287 } 2288 2289 x__xgafv: string, V1 error format. 2290 Allowed values 2291 1 - v1 error format 2292 2 - v2 error format 2293 2294Returns: 2295 An object of the form: 2296 2297 { # Response message for `TestIamPermissions` method. 2298 "permissions": [ # A subset of `TestPermissionsRequest.permissions` that the caller is 2299 # allowed. 2300 "A String", 2301 ], 2302 }</pre> 2303</div> 2304 2305</body></html>