<html><body>
<style>

body, h1, h2, h3, div, span, p, pre, a {
  margin: 0;
  padding: 0;
  border: 0;
  font-weight: inherit;
  font-style: inherit;
  font-size: 100%;
  font-family: inherit;
  vertical-align: baseline;
}

body {
  font-size: 13px;
  padding: 1em;
}

h1 {
  font-size: 26px;
  margin-bottom: 1em;
}

h2 {
  font-size: 24px;
  margin-bottom: 1em;
}

h3 {
  font-size: 20px;
  margin-bottom: 1em;
  margin-top: 1em;
}

pre, code {
  line-height: 1.5;
  font-family: Monaco, 'DejaVu Sans Mono', 'Bitstream Vera Sans Mono', 'Lucida Console', monospace;
}

pre {
  margin-top: 0.5em;
}

h1, h2, h3, p {
  font-family: Arial, sans-serif;
}

h1, h2, h3 {
  border-bottom: solid #CCC 1px;
}

.toc_element {
  margin-top: 0.5em;
}

.firstline {
  margin-left: 2em;
}

.method {
  margin-top: 1em;
  border: solid 1px #CCC;
  padding: 1em;
  background: #EEE;
}

.details {
  font-weight: bold;
  font-size: 14px;
}

</style>

<h1><a href="spanner_v1.html">Cloud Spanner API</a> . <a href="spanner_v1.projects.html">projects</a> . <a href="spanner_v1.projects.instances.html">instances</a> . <a href="spanner_v1.projects.instances.databases.html">databases</a> . <a href="spanner_v1.projects.instances.databases.sessions.html">sessions</a></h1>
<h2>Instance Methods</h2>
<p class="toc_element">
  <code><a href="#beginTransaction">beginTransaction(session, body, x__xgafv=None)</a></code></p>
<p class="firstline">Begins a new transaction. This step can often be skipped:</p>
<p class="toc_element">
  <code><a href="#commit">commit(session, body, x__xgafv=None)</a></code></p>
<p class="firstline">Commits a transaction. The request includes the mutations to be</p>
<p class="toc_element">
  <code><a href="#create">create(database, body, x__xgafv=None)</a></code></p>
<p class="firstline">Creates a new session. A session can be used to perform</p>
<p class="toc_element">
  <code><a href="#delete">delete(name, x__xgafv=None)</a></code></p>
<p class="firstline">Ends a session, releasing server resources associated with it. This will</p>
<p class="toc_element">
  <code><a href="#executeBatchDml">executeBatchDml(session, body, x__xgafv=None)</a></code></p>
<p class="firstline">Executes a batch of SQL DML statements. This method allows many statements</p>
<p class="toc_element">
  <code><a href="#executeSql">executeSql(session, body, x__xgafv=None)</a></code></p>
<p class="firstline">Executes an SQL statement, returning all results in a single reply. This</p>
<p class="toc_element">
  <code><a href="#executeStreamingSql">executeStreamingSql(session, body, x__xgafv=None)</a></code></p>
<p class="firstline">Like ExecuteSql, except returns the result</p>
<p class="toc_element">
  <code><a href="#get">get(name, x__xgafv=None)</a></code></p>
<p class="firstline">Gets a session. Returns `NOT_FOUND` if the session does not exist.</p>
<p class="toc_element">
  <code><a href="#list">list(database, pageSize=None, filter=None, pageToken=None, x__xgafv=None)</a></code></p>
<p class="firstline">Lists all sessions in a given database.</p>
<p class="toc_element">
  <code><a href="#list_next">list_next(previous_request, previous_response)</a></code></p>
<p class="firstline">Retrieves the next page of results.</p>
<p class="toc_element">
  <code><a href="#partitionQuery">partitionQuery(session, body, x__xgafv=None)</a></code></p>
<p class="firstline">Creates a set of partition tokens that can be used to execute a query</p>
<p class="toc_element">
  <code><a href="#partitionRead">partitionRead(session, body, x__xgafv=None)</a></code></p>
<p class="firstline">Creates a set of partition tokens that can be used to execute a read</p>
<p class="toc_element">
  <code><a href="#read">read(session, body, x__xgafv=None)</a></code></p>
<p class="firstline">Reads rows from the database using key lookups and scans, as a</p>
<p class="toc_element">
  <code><a href="#rollback">rollback(session, body, x__xgafv=None)</a></code></p>
<p class="firstline">Rolls back a transaction, releasing any locks it holds. It is a good</p>
<p class="toc_element">
  <code><a href="#streamingRead">streamingRead(session, body, x__xgafv=None)</a></code></p>
<p class="firstline">Like Read, except returns the result set as a</p>
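The `list`/`list_next` pair above follows the standard google-api-python-client pagination pattern. As a rough sketch (the `iter_all_sessions` helper name is ours, and the small stub below only mimics the resource's `list`/`list_next` behaviour so the loop can be exercised without credentials; in real use, `sessions` would be `service.projects().instances().databases().sessions()` from a discovery-built client):

```python
def iter_all_sessions(sessions, database):
    """Yield every session dict, following pageToken pages via list_next."""
    request = sessions.list(database=database)
    while request is not None:
        response = request.execute()
        for session in response.get("sessions", []):
            yield session
        # list_next returns None when there are no more pages.
        request = sessions.list_next(request, response)

# --- in-memory stub standing in for the discovery-built resource ---
class _FakeRequest:
    def __init__(self, pages, index):
        self._pages, self._index = pages, index
    def execute(self):
        return self._pages[self._index]

class _FakeSessions:
    def __init__(self, pages):
        self._pages = pages
    def list(self, database, pageToken=None):
        return _FakeRequest(self._pages, 0)
    def list_next(self, previous_request, previous_response):
        token = previous_response.get("nextPageToken")
        if token is None:
            return None
        return _FakeRequest(self._pages, int(token))

pages = [
    {"sessions": [{"name": "s1"}, {"name": "s2"}], "nextPageToken": "1"},
    {"sessions": [{"name": "s3"}]},
]
names = [s["name"] for s in iter_all_sessions(_FakeSessions(pages), "db")]
# names == ["s1", "s2", "s3"]
```

The loop shape (re-issue `list_next(previous_request, previous_response)` until it returns `None`) is the same for any paginated collection in this client library.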
<h3>Method Details</h3>
<div class="method">
    <code class="details" id="beginTransaction">beginTransaction(session, body, x__xgafv=None)</code>
  <pre>Begins a new transaction. This step can often be skipped:
Read, ExecuteSql and
Commit can begin a new transaction as a
side-effect.

Args:
  session: string, Required. The session in which the transaction runs. (required)
  body: object, The request body. (required)
    The object takes the form of:

{ # The request for BeginTransaction.
    "options": { # Required. Options for the new transaction.
        #
        # # Transactions
        #
        # Each session can have at most one active transaction at a time. After the
        # active transaction is completed, the session can immediately be
        # re-used for the next transaction. It is not necessary to create a
        # new session for each transaction.
        #
        # # Transaction Modes
        #
        # Cloud Spanner supports three transaction modes:
        #
        #   1. Locking read-write. This type of transaction is the only way
        #      to write data into Cloud Spanner. These transactions rely on
        #      pessimistic locking and, if necessary, two-phase commit.
        #      Locking read-write transactions may abort, requiring the
        #      application to retry.
        #
        #   2. Snapshot read-only. This transaction type provides guaranteed
        #      consistency across several reads, but does not allow
        #      writes. Snapshot read-only transactions can be configured to
        #      read at timestamps in the past. Snapshot read-only
        #      transactions do not need to be committed.
        #
        #   3. Partitioned DML. This type of transaction is used to execute
        #      a single Partitioned DML statement. Partitioned DML partitions
        #      the key space and runs the DML statement over each partition
        #      in parallel using separate, internal transactions that commit
        #      independently. Partitioned DML transactions do not need to be
        #      committed.
        #
        # For transactions that only read, snapshot read-only transactions
        # provide simpler semantics and are almost always faster. In
        # particular, read-only transactions do not take locks, so they do
        # not conflict with read-write transactions. As a consequence of not
        # taking locks, they also do not abort, so retry loops are not needed.
        #
        # Transactions may only read/write data in a single database. They
        # may, however, read/write data in different tables within that
        # database.
        #
        # ## Locking Read-Write Transactions
        #
        # Locking transactions may be used to atomically read-modify-write
        # data anywhere in a database. This type of transaction is externally
        # consistent.
        #
        # Clients should attempt to minimize the amount of time a transaction
        # is active. Faster transactions commit with higher probability
        # and cause less contention. Cloud Spanner attempts to keep read locks
        # active as long as the transaction continues to do reads, and the
        # transaction has not been terminated by
        # Commit or
        # Rollback.  Long periods of
        # inactivity at the client may cause Cloud Spanner to release a
        # transaction's locks and abort it.
        #
        # Conceptually, a read-write transaction consists of zero or more
        # reads or SQL statements followed by
        # Commit. At any time before
        # Commit, the client can send a
        # Rollback request to abort the
        # transaction.
        #
        # ### Semantics
        #
        # Cloud Spanner can commit the transaction if all read locks it acquired
        # are still valid at commit time, and it is able to acquire write
        # locks for all writes. Cloud Spanner can abort the transaction for any
        # reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees
        # that the transaction has not modified any user data in Cloud Spanner.
        #
        # Unless the transaction commits, Cloud Spanner makes no guarantees about
        # how long the transaction's locks were held for. It is an error to
        # use Cloud Spanner locks for any sort of mutual exclusion other than
        # between Cloud Spanner transactions themselves.
        #
        # ### Retrying Aborted Transactions
        #
        # When a transaction aborts, the application can choose to retry the
        # whole transaction again. To maximize the chances of successfully
        # committing the retry, the client should execute the retry in the
        # same session as the original attempt. The original session's lock
        # priority increases with each consecutive abort, meaning that each
        # attempt has a slightly better chance of success than the previous.
        #
        # Under some circumstances (e.g., many transactions attempting to
        # modify the same row(s)), a transaction can abort many times in a
        # short period before successfully committing. Thus, it is not a good
        # idea to cap the number of retries a transaction can attempt;
        # instead, it is better to limit the total amount of wall time spent
        # retrying.
        #
        # ### Idle Transactions
        #
        # A transaction is considered idle if it has no outstanding reads or
        # SQL queries and has not started a read or SQL query within the last 10
        # seconds. Idle transactions can be aborted by Cloud Spanner so that they
        # don't hold on to locks indefinitely. In that case, the commit will
        # fail with error `ABORTED`.
        #
        # If this behavior is undesirable, periodically executing a simple
        # SQL query in the transaction (e.g., `SELECT 1`) prevents the
        # transaction from becoming idle.
        #
        # ## Snapshot Read-Only Transactions
        #
        # Snapshot read-only transactions provide a simpler method than
        # locking read-write transactions for doing several consistent
        # reads. However, this type of transaction does not support writes.
        #
        # Snapshot transactions do not take locks. Instead, they work by
        # choosing a Cloud Spanner timestamp, then executing all reads at that
        # timestamp. Since they do not acquire locks, they do not block
        # concurrent read-write transactions.
        #
        # Unlike locking read-write transactions, snapshot read-only
        # transactions never abort. They can fail if the chosen read
        # timestamp is garbage collected; however, the default garbage
        # collection policy is generous enough that most applications do not
        # need to worry about this in practice.
        #
        # Snapshot read-only transactions do not need to call
        # Commit or
        # Rollback (and in fact are not
        # permitted to do so).
        #
        # To execute a snapshot transaction, the client specifies a timestamp
        # bound, which tells Cloud Spanner how to choose a read timestamp.
        #
        # The types of timestamp bound are:
        #
        #   - Strong (the default).
        #   - Bounded staleness.
        #   - Exact staleness.
        #
        # If the Cloud Spanner database to be read is geographically distributed,
        # stale read-only transactions can execute more quickly than strong
        # or read-write transactions, because they are able to execute far
        # from the leader replica.
        #
        # Each type of timestamp bound is discussed in detail below.
        #
        # ### Strong
        #
        # Strong reads are guaranteed to see the effects of all transactions
        # that have committed before the start of the read. Furthermore, all
        # rows yielded by a single read are consistent with each other -- if
        # any part of the read observes a transaction, all parts of the read
        # see the transaction.
        #
        # Strong reads are not repeatable: two consecutive strong read-only
        # transactions might return inconsistent results if there are
        # concurrent writes. If consistency across reads is required, the
        # reads should be executed within a transaction or at an exact read
        # timestamp.
        #
        # See TransactionOptions.ReadOnly.strong.
        #
        # ### Exact Staleness
        #
        # These timestamp bounds execute reads at a user-specified
        # timestamp. Reads at a timestamp are guaranteed to see a consistent
        # prefix of the global transaction history: they observe
        # modifications done by all transactions with a commit timestamp <=
        # the read timestamp, and observe none of the modifications done by
        # transactions with a larger commit timestamp. They will block until
        # all conflicting transactions that may be assigned commit timestamps
        # <= the read timestamp have finished.
        #
        # The timestamp can either be expressed as an absolute Cloud Spanner commit
        # timestamp or a staleness relative to the current time.
        #
        # These modes do not require a "negotiation phase" to pick a
        # timestamp. As a result, they execute slightly faster than the
        # equivalent boundedly stale concurrency modes. On the other hand,
        # boundedly stale reads usually return fresher results.
        #
        # See TransactionOptions.ReadOnly.read_timestamp and
        # TransactionOptions.ReadOnly.exact_staleness.
        #
        # ### Bounded Staleness
        #
        # Bounded staleness modes allow Cloud Spanner to pick the read timestamp,
        # subject to a user-provided staleness bound. Cloud Spanner chooses the
        # newest timestamp within the staleness bound that allows execution
        # of the reads at the closest available replica without blocking.
        #
        # All rows yielded are consistent with each other -- if any part of
        # the read observes a transaction, all parts of the read see the
        # transaction. Boundedly stale reads are not repeatable: two stale
        # reads, even if they use the same staleness bound, can execute at
        # different timestamps and thus return inconsistent results.
        #
        # Boundedly stale reads execute in two phases: the first phase
        # negotiates a timestamp among all replicas needed to serve the
        # read. In the second phase, reads are executed at the negotiated
        # timestamp.
        #
        # As a result of the two-phase execution, bounded staleness reads are
        # usually a little slower than comparable exact staleness
        # reads. However, they are typically able to return fresher
        # results, and are more likely to execute at the closest replica.
        #
        # Because the timestamp negotiation requires up-front knowledge of
        # which rows will be read, it can only be used with single-use
        # read-only transactions.
        #
        # See TransactionOptions.ReadOnly.max_staleness and
        # TransactionOptions.ReadOnly.min_read_timestamp.
        #
        # ### Old Read Timestamps and Garbage Collection
        #
        # Cloud Spanner continuously garbage collects deleted and overwritten data
        # in the background to reclaim storage space. This process is known
        # as "version GC". By default, version GC reclaims versions after they
        # are one hour old. Because of this, Cloud Spanner cannot perform reads
        # at read timestamps more than one hour in the past. This
        # restriction also applies to in-progress reads and/or SQL queries whose
        # timestamps become too old while executing. Reads and SQL queries with
        # too-old read timestamps fail with the error `FAILED_PRECONDITION`.
        #
        # ## Partitioned DML Transactions
        #
        # Partitioned DML transactions are used to execute DML statements with a
        # different execution strategy that provides different, and often better,
        # scalability properties for large, table-wide operations than DML in a
        # ReadWrite transaction. Smaller scoped statements, such as an OLTP workload,
        # should prefer using ReadWrite transactions.
        #
        # Partitioned DML partitions the keyspace and runs the DML statement on each
        # partition in separate, internal transactions. These transactions commit
        # automatically when complete, and run independently from one another.
        #
        # To reduce lock contention, this execution strategy only acquires read locks
        # on rows that match the WHERE clause of the statement. Additionally, the
        # smaller per-partition transactions hold locks for less time.
        #
        # That said, Partitioned DML is not a drop-in replacement for standard DML used
        # in ReadWrite transactions.
        #
        #  - The DML statement must be fully-partitionable. Specifically, the statement
        #    must be expressible as the union of many statements which each access only
        #    a single row of the table.
        #
        #  - The statement is not applied atomically to all rows of the table. Rather,
        #    the statement is applied atomically to partitions of the table, in
        #    independent transactions. Secondary index rows are updated atomically
        #    with the base table rows.
        #
        #  - Partitioned DML does not guarantee exactly-once execution semantics
        #    against a partition. The statement will be applied at least once to each
        #    partition. It is strongly recommended that the DML statement should be
        #    idempotent to avoid unexpected results. For instance, it is potentially
        #    dangerous to run a statement such as
        #    `UPDATE table SET column = column + 1` as it could be run multiple times
        #    against some rows.
        #
        #  - The partitions are committed automatically - there is no support for
        #    Commit or Rollback. If the call returns an error, or if the client issuing
        #    the ExecuteSql call dies, it is possible that some rows had the statement
        #    executed on them successfully. It is also possible that the statement was
        #    never executed against other rows.
        #
        #  - Partitioned DML transactions may only contain the execution of a single
        #    DML statement via ExecuteSql or ExecuteStreamingSql.
        #
        #  - If any error is encountered during the execution of the partitioned DML
        #    operation (for instance, a UNIQUE INDEX violation, division by zero, or a
        #    value that cannot be stored due to schema constraints), then the
        #    operation is stopped at that point and an error is returned. It is
        #    possible that at this point, some partitions have been committed (or even
        #    committed multiple times), and other partitions have not been run at all.
        #
        # Given the above, Partitioned DML is a good fit for large, database-wide
        # operations that are idempotent, such as deleting old rows from a very large
        # table.
      "readWrite": { # Transaction may write.
          #
          # Message type to initiate a read-write transaction. Currently this
          # transaction type has no options.
          #
          # Authorization to begin a read-write transaction requires
          # `spanner.databases.beginOrRollbackReadWriteTransaction` permission
          # on the `session` resource.
      },
      "readOnly": { # Transaction will not write.
          #
          # Message type to initiate a read-only transaction.
          #
          # Authorization to begin a read-only transaction requires
          # `spanner.databases.beginReadOnlyTransaction` permission
          # on the `session` resource.
        "minReadTimestamp": "A String", # Executes all reads at a timestamp >= `min_read_timestamp`.
            #
            # This is useful for requesting fresher data than some previous
            # read, or data that is fresh enough to observe the effects of some
            # previously committed transaction whose timestamp is known.
            #
            # Note that this option can only be used in single-use transactions.
            #
            # A timestamp in RFC3339 UTC "Zulu" format, accurate to nanoseconds.
            # Example: `"2014-10-02T15:01:23.045123456Z"`.
        "returnReadTimestamp": True or False, # If true, the Cloud Spanner-selected read timestamp is included in
            # the Transaction message that describes the transaction.
        "maxStaleness": "A String", # Read data at a timestamp >= `NOW - max_staleness`
            # seconds. Guarantees that all writes that have committed more
            # than the specified number of seconds ago are visible. Because
            # Cloud Spanner chooses the exact timestamp, this mode works even if
            # the client's local clock is substantially skewed from Cloud Spanner
            # commit timestamps.
            #
            # Useful for reading the freshest data available at a nearby
            # replica, while bounding the possible staleness if the local
            # replica has fallen behind.
            #
            # Note that this option can only be used in single-use
            # transactions.
        "exactStaleness": "A String", # Executes all reads at a timestamp that is `exact_staleness`
            # old. The timestamp is chosen soon after the read is started.
            #
            # Guarantees that all writes that have committed more than the
            # specified number of seconds ago are visible. Because Cloud Spanner
            # chooses the exact timestamp, this mode works even if the client's
            # local clock is substantially skewed from Cloud Spanner commit
            # timestamps.
            #
            # Useful for reading at nearby replicas without the distributed
            # timestamp negotiation overhead of `max_staleness`.
        "readTimestamp": "A String", # Executes all reads at the given timestamp. Unlike other modes,
            # reads at a specific timestamp are repeatable; the same read at
            # the same timestamp always returns the same data. If the
            # timestamp is in the future, the read will block until the
            # specified timestamp, modulo the read's deadline.
            #
            # Useful for large scale consistent reads such as mapreduces, or
            # for coordinating many reads against a consistent snapshot of the
            # data.
            #
            # A timestamp in RFC3339 UTC "Zulu" format, accurate to nanoseconds.
            # Example: `"2014-10-02T15:01:23.045123456Z"`.
        "strong": True or False, # Read at a timestamp where all previously committed transactions
            # are visible.
      },
      "partitionedDml": { # Partitioned DML transaction.
          #
          # Message type to initiate a Partitioned DML transaction.
          #
          # Authorization to begin a Partitioned DML transaction requires
          # `spanner.databases.beginPartitionedDmlTransaction` permission
          # on the `session` resource.
      },
    },
  }

  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # A transaction.
    "readTimestamp": "A String", # For snapshot read-only transactions, the read timestamp chosen
        # for the transaction. Not returned by default: see
        # TransactionOptions.ReadOnly.return_read_timestamp.
        #
        # A timestamp in RFC3339 UTC "Zulu" format, accurate to nanoseconds.
        # Example: `"2014-10-02T15:01:23.045123456Z"`.
    "id": "A String", # `id` may be used to identify the transaction in subsequent
        # Read,
        # ExecuteSql,
        # Commit, or
        # Rollback calls.
        #
        # Single-use read-only transactions do not have IDs, because
        # single-use transactions do not support multiple requests.
  }</pre>
</div>
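The three transaction modes described above map onto three mutually exclusive keys of `options` in the BeginTransaction request body. A sketch of the corresponding dicts (the shapes follow the request documented above; the commented-out call at the bottom is illustrative only and assumes a discovery-built `service` and an existing session name):

```python
# Locking read-write and Partitioned DML currently take no options.
read_write = {"options": {"readWrite": {}}}
partitioned_dml = {"options": {"partitionedDml": {}}}

# Snapshot read-only, reading 10 seconds in the past and asking Cloud
# Spanner to report the read timestamp it actually chose.
read_only = {
    "options": {
        "readOnly": {
            "exactStaleness": "10s",
            "returnReadTimestamp": True,
        }
    }
}

# Illustrative call (not run here; requires credentials and a session):
# txn = service.projects().instances().databases().sessions().beginTransaction(
#     session="projects/p/instances/i/databases/d/sessions/s",
#     body=read_only,
# ).execute()
# txn["id"] then identifies the transaction in Read/ExecuteSql/Commit calls.
```

Exactly one of `readWrite`, `readOnly`, or `partitionedDml` should be set per request; single-use read-only transactions can skip `beginTransaction` entirely, as the text above notes.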

<div class="method">
    <code class="details" id="commit">commit(session, body, x__xgafv=None)</code>
  <pre>Commits a transaction. The request includes the mutations to be
applied to rows in the database.

`Commit` might return an `ABORTED` error. This can occur at any time;
commonly, the cause is conflicts with concurrent
transactions. However, it can also happen for a variety of other
reasons. If `Commit` returns `ABORTED`, the caller should re-attempt
the transaction from the beginning, re-using the same session.

Args:
  session: string, Required. The session in which the transaction to be committed is running. (required)
  body: object, The request body. (required)
    The object takes the form of:

{ # The request for Commit.
    "transactionId": "A String", # Commit a previously-started transaction.
    "mutations": [ # The mutations to be executed when this transaction commits. All
        # mutations are applied atomically, in the order they appear in
        # this list.
      { # A modification to one or more Cloud Spanner rows.  Mutations can be
          # applied to a Cloud Spanner database by sending them in a
          # Commit call.
        "insert": { # Insert new rows in a table. If any of the rows already exist,
            # the write or transaction fails with error `ALREADY_EXISTS`.
            # Arguments to insert, update, insert_or_update, and
            # replace operations.
          "table": "A String", # Required. The table whose rows will be written.
          "values": [ # The values to be written. `values` can contain more than one
              # list of values. If it does, then multiple rows are written, one
              # for each entry in `values`. Each list in `values` must have
              # exactly as many entries as there are entries in columns
              # above. Sending multiple lists is equivalent to sending multiple
              # `Mutation`s, each containing one `values` entry and repeating
              # table and columns. Individual values in each list are
              # encoded as described here.
            [
              "",
            ],
          ],
          "columns": [ # The names of the columns in table to be written.
              #
              # The list of columns must contain enough columns to allow
              # Cloud Spanner to derive values for all primary key columns in the
              # row(s) to be modified.
            "A String",
          ],
        },
        "replace": { # Like insert, except that if the row already exists, it is
            # deleted, and the column values provided are inserted
            # instead. Unlike insert_or_update, this means any values not
            # explicitly written become `NULL`.
            # Arguments to insert, update, insert_or_update, and
            # replace operations.
          "table": "A String", # Required. The table whose rows will be written.
          "values": [ # The values to be written. `values` can contain more than one
              # list of values. If it does, then multiple rows are written, one
              # for each entry in `values`. Each list in `values` must have
              # exactly as many entries as there are entries in columns
              # above. Sending multiple lists is equivalent to sending multiple
              # `Mutation`s, each containing one `values` entry and repeating
              # table and columns. Individual values in each list are
              # encoded as described here.
            [
              "",
            ],
          ],
          "columns": [ # The names of the columns in table to be written.
              #
              # The list of columns must contain enough columns to allow
              # Cloud Spanner to derive values for all primary key columns in the
              # row(s) to be modified.
            "A String",
          ],
        },
        "insertOrUpdate": { # Like insert, except that if the row already exists, then
            # its column values are overwritten with the ones provided. Any
            # column values not explicitly written are preserved.
            # Arguments to insert, update, insert_or_update, and
            # replace operations.
          "table": "A String", # Required. The table whose rows will be written.
          "values": [ # The values to be written. `values` can contain more than one
              # list of values. If it does, then multiple rows are written, one
              # for each entry in `values`. Each list in `values` must have
              # exactly as many entries as there are entries in columns
              # above. Sending multiple lists is equivalent to sending multiple
              # `Mutation`s, each containing one `values` entry and repeating
              # table and columns. Individual values in each list are
              # encoded as described here.
            [
              "",
            ],
          ],
          "columns": [ # The names of the columns in table to be written.
              #
              # The list of columns must contain enough columns to allow
              # Cloud Spanner to derive values for all primary key columns in the
              # row(s) to be modified.
            "A String",
          ],
        },
        "update": { # Update existing rows in a table. If any of the rows does not
            # already exist, the transaction fails with error `NOT_FOUND`.
            # Arguments to insert, update, insert_or_update, and
            # replace operations.
          "table": "A String", # Required. The table whose rows will be written.
          "values": [ # The values to be written. `values` can contain more than one
              # list of values. If it does, then multiple rows are written, one
              # for each entry in `values`. Each list in `values` must have
              # exactly as many entries as there are entries in columns
              # above. Sending multiple lists is equivalent to sending multiple
              # `Mutation`s, each containing one `values` entry and repeating
              # table and columns. Individual values in each list are
              # encoded as described here.
            [
              "",
            ],
          ],
          "columns": [ # The names of the columns in table to be written.
              #
              # The list of columns must contain enough columns to allow
              # Cloud Spanner to derive values for all primary key columns in the
              # row(s) to be modified.
            "A String",
          ],
        },
634        "delete": { # Arguments to delete operations. # Delete rows from a table. Succeeds whether or not the named
635            # rows were present.
636          "table": "A String", # Required. The table whose rows will be deleted.
637          "keySet": { # Required. The primary keys of the rows within table to delete.
638              # Delete is idempotent. The transaction will succeed even if some or
639              # all rows do not exist.
640              #
641              # `KeySet` defines a collection of Cloud Spanner keys and/or key
642              # ranges. All the keys are expected to be in the same table or index.
643              # The keys need not be sorted in any particular way.
644              #
645              # If the same key is specified multiple times in the set (for example
646            "ranges": [ # A list of key ranges. See KeyRange for more information about
647                # key range specifications.
648              { # KeyRange represents a range of rows in a table or index.
649                  #
650                  # A range has a start key and an end key. These keys can be open or
651                  # closed, indicating if the range includes rows with that key.
652                  #
653                  # Keys are represented by lists, where the ith value in the list
654                  # corresponds to the ith component of the table or index primary key.
655                  # Individual values are encoded as described
656                  # here.
657                  #
658                  # For example, consider the following table definition:
659                  #
660                  #     CREATE TABLE UserEvents (
661                  #       UserName STRING(MAX),
662                  #       EventDate STRING(10)
663                  #     ) PRIMARY KEY(UserName, EventDate);
664                  #
665                  # The following keys name rows in this table:
666                  #
667                  #     "Bob", "2014-09-23"
668                  #
669                  # Since the `UserEvents` table's `PRIMARY KEY` clause names two
670                  # columns, each `UserEvents` key has two elements; the first is the
671                  # `UserName`, and the second is the `EventDate`.
672                  #
673                  # Key ranges with multiple components are interpreted
674                  # lexicographically by component using the table or index key's declared
675                  # sort order. For example, the following range returns all events for
676                  # user `"Bob"` that occurred in the year 2015:
677                  #
678                  #     "start_closed": ["Bob", "2015-01-01"]
679                  #     "end_closed": ["Bob", "2015-12-31"]
680                  #
681                  # Start and end keys can omit trailing key components. This affects the
682                  # inclusion and exclusion of rows that exactly match the provided key
683                  # components: if the key is closed, then rows that exactly match the
684                  # provided components are included; if the key is open, then rows
685                  # that exactly match are not included.
686                  #
687                  # For example, the following range includes all events for `"Bob"` that
688                  # occurred during and after the year 2000:
689                  #
690                  #     "start_closed": ["Bob", "2000-01-01"]
691                  #     "end_closed": ["Bob"]
692                  #
693                  # The next example retrieves all events for `"Bob"`:
694                  #
695                  #     "start_closed": ["Bob"]
696                  #     "end_closed": ["Bob"]
697                  #
698                  # To retrieve events before the year 2000:
699                  #
700                  #     "start_closed": ["Bob"]
701                  #     "end_open": ["Bob", "2000-01-01"]
702                  #
703                  # The following range includes all rows in the table:
704                  #
705                  #     "start_closed": []
706                  #     "end_closed": []
707                  #
708                  # This range returns all users whose `UserName` begins with any
709                  # character from A to C:
710                  #
711                  #     "start_closed": ["A"]
712                  #     "end_open": ["D"]
713                  #
714                  # This range returns all users whose `UserName` begins with B:
715                  #
716                  #     "start_closed": ["B"]
717                  #     "end_open": ["C"]
718                  #
719                  # Key ranges honor column sort order. For example, suppose a table is
720                  # defined as follows:
721                  #
722                  #     CREATE TABLE DescendingSortedTable (
723                  #       Key INT64,
724                  #       ...
725                  #     ) PRIMARY KEY(Key DESC);
726                  #
727                  # The following range retrieves all rows with key values between 1
728                  # and 100 inclusive:
729                  #
730                  #     "start_closed": ["100"]
731                  #     "end_closed": ["1"]
732                  #
733                  # Note that 100 is passed as the start, and 1 is passed as the end,
734                  # because `Key` is a descending column in the schema.
735                "endOpen": [ # If the end is open, then the range excludes rows whose first
736                    # `len(end_open)` key columns exactly match `end_open`.
737                  "",
738                ],
739                "startOpen": [ # If the start is open, then the range excludes rows whose first
740                    # `len(start_open)` key columns exactly match `start_open`.
741                  "",
742                ],
743                "endClosed": [ # If the end is closed, then the range includes all rows whose
744                    # first `len(end_closed)` key columns exactly match `end_closed`.
745                  "",
746                ],
747                "startClosed": [ # If the start is closed, then the range includes all rows whose
748                    # first `len(start_closed)` key columns exactly match `start_closed`.
749                  "",
750                ],
751              },
752            ],
753            "keys": [ # A list of specific keys. Entries in `keys` should have exactly as
754                # many elements as there are columns in the primary or index key
755                # with which this `KeySet` is used.  Individual key values are
756                # encoded as described here.
757              [
758                "",
759              ],
760            ],
761            "all": True or False, # For convenience `all` can be set to `true` to indicate that this
762                # `KeySet` matches all keys in the table or index. Note that any keys
763                # specified in `keys` or `ranges` are only yielded once.
764          },
765        },
766      },
767    ],
768    "singleUseTransaction": { # Execute mutations in a temporary transaction. Note that unlike
769        # commit of a previously-started transaction, commit with a
770        # temporary transaction is non-idempotent. That is, if the
771        # `CommitRequest` is sent to Cloud Spanner more than once (for
772        # instance, due to retries in the application, or in the
773        # transport library), it is possible that the mutations are
774        # executed more than once. If this is undesirable, use
775        # BeginTransaction and
776        # Commit instead.
777        #
778        # # Transactions
779        # Each session can have at most one active transaction at a time. After the
780        # active transaction is completed, the session can immediately be
781        # re-used for the next transaction. It is not necessary to create a
782        # new session for each transaction.
783        #
784        # # Transaction Modes
785        #
786        # Cloud Spanner supports three transaction modes:
787        #
788        #   1. Locking read-write. This type of transaction is the only way
789        #      to write data into Cloud Spanner. These transactions rely on
790        #      pessimistic locking and, if necessary, two-phase commit.
791        #      Locking read-write transactions may abort, requiring the
792        #      application to retry.
793        #
794        #   2. Snapshot read-only. This transaction type provides guaranteed
795        #      consistency across several reads, but does not allow
796        #      writes. Snapshot read-only transactions can be configured to
797        #      read at timestamps in the past. Snapshot read-only
798        #      transactions do not need to be committed.
799        #
800        #   3. Partitioned DML. This type of transaction is used to execute
801        #      a single Partitioned DML statement. Partitioned DML partitions
802        #      the key space and runs the DML statement over each partition
803        #      in parallel using separate, internal transactions that commit
804        #      independently. Partitioned DML transactions do not need to be
805        #      committed.
806        #
807        # For transactions that only read, snapshot read-only transactions
808        # provide simpler semantics and are almost always faster. In
809        # particular, read-only transactions do not take locks, so they do
810        # not conflict with read-write transactions. As a consequence of not
811        # taking locks, they also do not abort, so retry loops are not needed.
812        #
813        # Transactions may only read/write data in a single database. They
814        # may, however, read/write data in different tables within that
815        # database.
816        #
817        # ## Locking Read-Write Transactions
818        #
819        # Locking transactions may be used to atomically read-modify-write
820        # data anywhere in a database. This type of transaction is externally
821        # consistent.
822        #
823        # Clients should attempt to minimize the amount of time a transaction
824        # is active. Faster transactions commit with higher probability
825        # and cause less contention. Cloud Spanner attempts to keep read locks
826        # active as long as the transaction continues to do reads, and the
827        # transaction has not been terminated by
828        # Commit or
829        # Rollback.  Long periods of
830        # inactivity at the client may cause Cloud Spanner to release a
831        # transaction's locks and abort it.
832        #
833        # Conceptually, a read-write transaction consists of zero or more
834        # reads or SQL statements followed by
835        # Commit. At any time before
836        # Commit, the client can send a
837        # Rollback request to abort the
838        # transaction.
839        #
840        # ### Semantics
841        #
842        # Cloud Spanner can commit the transaction if all read locks it acquired
843        # are still valid at commit time, and it is able to acquire write
844        # locks for all writes. Cloud Spanner can abort the transaction for any
845        # reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees
846        # that the transaction has not modified any user data in Cloud Spanner.
847        #
848        # Unless the transaction commits, Cloud Spanner makes no guarantees about
849        # how long the transaction's locks were held for. It is an error to
850        # use Cloud Spanner locks for any sort of mutual exclusion other than
851        # between Cloud Spanner transactions themselves.
852        #
853        # ### Retrying Aborted Transactions
854        #
855        # When a transaction aborts, the application can choose to retry the
856        # whole transaction again. To maximize the chances of successfully
857        # committing the retry, the client should execute the retry in the
858        # same session as the original attempt. The original session's lock
859        # priority increases with each consecutive abort, meaning that each
860        # attempt has a slightly better chance of success than the previous.
861        #
862        # Under some circumstances (e.g., many transactions attempting to
863        # modify the same row(s)), a transaction can abort many times in a
864        # short period before successfully committing. Thus, it is not a good
865        # idea to cap the number of retries a transaction can attempt;
866        # instead, it is better to limit the total amount of wall time spent
867        # retrying.
868        #
869        # ### Idle Transactions
870        #
871        # A transaction is considered idle if it has no outstanding reads or
872        # SQL queries and has not started a read or SQL query within the last 10
873        # seconds. Idle transactions can be aborted by Cloud Spanner so that they
874        # don't hold on to locks indefinitely. In that case, the commit will
875        # fail with error `ABORTED`.
876        #
877        # If this behavior is undesirable, periodically executing a simple
878        # SQL query in the transaction (e.g., `SELECT 1`) prevents the
879        # transaction from becoming idle.
880        #
881        # ## Snapshot Read-Only Transactions
882        #
883        # Snapshot read-only transactions provide a simpler method than
884        # locking read-write transactions for doing several consistent
885        # reads. However, this type of transaction does not support writes.
886        #
887        # Snapshot transactions do not take locks. Instead, they work by
888        # choosing a Cloud Spanner timestamp, then executing all reads at that
889        # timestamp. Since they do not acquire locks, they do not block
890        # concurrent read-write transactions.
891        #
892        # Unlike locking read-write transactions, snapshot read-only
893        # transactions never abort. They can fail if the chosen read
894        # timestamp is garbage collected; however, the default garbage
895        # collection policy is generous enough that most applications do not
896        # need to worry about this in practice.
897        #
898        # Snapshot read-only transactions do not need to call
899        # Commit or
900        # Rollback (and in fact are not
901        # permitted to do so).
902        #
903        # To execute a snapshot transaction, the client specifies a timestamp
904        # bound, which tells Cloud Spanner how to choose a read timestamp.
905        #
906        # The types of timestamp bound are:
907        #
908        #   - Strong (the default).
909        #   - Bounded staleness.
910        #   - Exact staleness.
911        #
912        # If the Cloud Spanner database to be read is geographically distributed,
913        # stale read-only transactions can execute more quickly than strong
914        # or read-write transactions, because they are able to execute far
915        # from the leader replica.
916        #
917        # Each type of timestamp bound is discussed in detail below.
918        #
919        # ### Strong
920        #
921        # Strong reads are guaranteed to see the effects of all transactions
922        # that have committed before the start of the read. Furthermore, all
923        # rows yielded by a single read are consistent with each other -- if
924        # any part of the read observes a transaction, all parts of the read
925        # see the transaction.
926        #
927        # Strong reads are not repeatable: two consecutive strong read-only
928        # transactions might return inconsistent results if there are
929        # concurrent writes. If consistency across reads is required, the
930        # reads should be executed within a transaction or at an exact read
931        # timestamp.
932        #
933        # See TransactionOptions.ReadOnly.strong.
934        #
935        # ### Exact Staleness
936        #
937        # These timestamp bounds execute reads at a user-specified
938        # timestamp. Reads at a timestamp are guaranteed to see a consistent
939        # prefix of the global transaction history: they observe
940        # modifications done by all transactions with a commit timestamp <=
941        # the read timestamp, and observe none of the modifications done by
942        # transactions with a larger commit timestamp. They will block until
943        # all conflicting transactions that may be assigned commit timestamps
944        # <= the read timestamp have finished.
945        #
946        # The timestamp can either be expressed as an absolute Cloud Spanner commit
947        # timestamp or a staleness relative to the current time.
948        #
949        # These modes do not require a "negotiation phase" to pick a
950        # timestamp. As a result, they execute slightly faster than the
951        # equivalent boundedly stale concurrency modes. On the other hand,
952        # boundedly stale reads usually return fresher results.
953        #
954        # See TransactionOptions.ReadOnly.read_timestamp and
955        # TransactionOptions.ReadOnly.exact_staleness.
956        #
957        # ### Bounded Staleness
958        #
959        # Bounded staleness modes allow Cloud Spanner to pick the read timestamp,
960        # subject to a user-provided staleness bound. Cloud Spanner chooses the
961        # newest timestamp within the staleness bound that allows execution
962        # of the reads at the closest available replica without blocking.
963        #
964        # All rows yielded are consistent with each other -- if any part of
965        # the read observes a transaction, all parts of the read see the
966        # transaction. Boundedly stale reads are not repeatable: two stale
967        # reads, even if they use the same staleness bound, can execute at
968        # different timestamps and thus return inconsistent results.
969        #
970        # Boundedly stale reads execute in two phases: the first phase
971        # negotiates a timestamp among all replicas needed to serve the
972        # read. In the second phase, reads are executed at the negotiated
973        # timestamp.
974        #
975        # As a result of the two-phase execution, bounded staleness reads are
976        # usually a little slower than comparable exact staleness
977        # reads. However, they are typically able to return fresher
978        # results, and are more likely to execute at the closest replica.
979        #
980        # Because the timestamp negotiation requires up-front knowledge of
981        # which rows will be read, it can only be used with single-use
982        # read-only transactions.
983        #
984        # See TransactionOptions.ReadOnly.max_staleness and
985        # TransactionOptions.ReadOnly.min_read_timestamp.
986        #
987        # ### Old Read Timestamps and Garbage Collection
988        #
989        # Cloud Spanner continuously garbage collects deleted and overwritten data
990        # in the background to reclaim storage space. This process is known
991        # as "version GC". By default, version GC reclaims versions after they
992        # are one hour old. Because of this, Cloud Spanner cannot perform reads
993        # at read timestamps more than one hour in the past. This
994        # restriction also applies to in-progress reads and/or SQL queries whose
995        # timestamps become too old while executing. Reads and SQL queries with
996        # too-old read timestamps fail with the error `FAILED_PRECONDITION`.
997        #
998        # ## Partitioned DML Transactions
999        #
1000        # Partitioned DML transactions are used to execute DML statements with a
1001        # different execution strategy that provides different, and often better,
1002        # scalability properties for large, table-wide operations than DML in a
1003        # ReadWrite transaction. Smaller scoped statements, such as an OLTP workload,
1004        # should prefer using ReadWrite transactions.
1005        #
1006        # Partitioned DML partitions the keyspace and runs the DML statement on each
1007        # partition in separate, internal transactions. These transactions commit
1008        # automatically when complete, and run independently from one another.
1009        #
1010        # To reduce lock contention, this execution strategy only acquires read locks
1011        # on rows that match the WHERE clause of the statement. Additionally, the
1012        # smaller per-partition transactions hold locks for less time.
1013        #
1014        # That said, Partitioned DML is not a drop-in replacement for standard DML used
1015        # in ReadWrite transactions.
1016        #
1017        #  - The DML statement must be fully-partitionable. Specifically, the statement
1018        #    must be expressible as the union of many statements which each access only
1019        #    a single row of the table.
1020        #
1021        #  - The statement is not applied atomically to all rows of the table. Rather,
1022        #    the statement is applied atomically to partitions of the table, in
1023        #    independent transactions. Secondary index rows are updated atomically
1024        #    with the base table rows.
1025        #
1026        #  - Partitioned DML does not guarantee exactly-once execution semantics
1027        #    against a partition. The statement will be applied at least once to each
1028        #    partition. It is strongly recommended that the DML statement be
1029        #    idempotent to avoid unexpected results. For instance, it is potentially
1030        #    dangerous to run a statement such as
1031        #    `UPDATE table SET column = column + 1` as it could be run multiple times
1032        #    against some rows.
1033        #
1034        #  - The partitions are committed automatically - there is no support for
1035        #    Commit or Rollback. If the call returns an error, or if the client issuing
1036        #    the ExecuteSql call dies, it is possible that some rows had the statement
1037        #    executed on them successfully. It is also possible that the statement was
1038        #    never executed against other rows.
1039        #
1040        #  - Partitioned DML transactions may only contain the execution of a single
1041        #    DML statement via ExecuteSql or ExecuteStreamingSql.
1042        #
1043        #  - If any error is encountered during the execution of the partitioned DML
1044        #    operation (for instance, a UNIQUE INDEX violation, division by zero, or a
1045        #    value that cannot be stored due to schema constraints), then the
1046        #    operation is stopped at that point and an error is returned. It is
1047        #    possible that at this point, some partitions have been committed (or even
1048        #    committed multiple times), and other partitions have not been run at all.
1049        #
1050        # Given the above, Partitioned DML is a good fit for large, database-wide
1051        # operations that are idempotent, such as deleting old rows from a very large
1052        # table.
1053      "readWrite": { # Message type to initiate a read-write transaction. Currently this
1054          # transaction type has no options. # Transaction may write.
1055          #
1056          # Authorization to begin a read-write transaction requires
1057          # `spanner.databases.beginOrRollbackReadWriteTransaction` permission
1058          # on the `session` resource.
1059      },
1060      "readOnly": { # Message type to initiate a read-only transaction. # Transaction will not write.
1061          #
1062          # Authorization to begin a read-only transaction requires
1063          # `spanner.databases.beginReadOnlyTransaction` permission
1064          # on the `session` resource.
1065        "minReadTimestamp": "A String", # Executes all reads at a timestamp >= `min_read_timestamp`.
1066            #
1067            # This is useful for requesting fresher data than some previous
1068            # read, or data that is fresh enough to observe the effects of some
1069            # previously committed transaction whose timestamp is known.
1070            #
1071            # Note that this option can only be used in single-use transactions.
1072            #
1073            # A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
1074            # Example: `"2014-10-02T15:01:23.045123456Z"`.
1075        "returnReadTimestamp": True or False, # If true, the Cloud Spanner-selected read timestamp is included in
1076            # the Transaction message that describes the transaction.
1077        "maxStaleness": "A String", # Read data at a timestamp >= `NOW - max_staleness`
1078            # seconds. Guarantees that all writes that have committed more
1079            # than the specified number of seconds ago are visible. Because
1080            # Cloud Spanner chooses the exact timestamp, this mode works even if
1081            # the client's local clock is substantially skewed from Cloud Spanner
1082            # commit timestamps.
1083            #
1084            # Useful for reading the freshest data available at a nearby
1085            # replica, while bounding the possible staleness if the local
1086            # replica has fallen behind.
1087            #
1088            # Note that this option can only be used in single-use
1089            # transactions.
1090        "exactStaleness": "A String", # Executes all reads at a timestamp that is `exact_staleness`
1091            # old. The timestamp is chosen soon after the read is started.
1092            #
1093            # Guarantees that all writes that have committed more than the
1094            # specified number of seconds ago are visible. Because Cloud Spanner
1095            # chooses the exact timestamp, this mode works even if the client's
1096            # local clock is substantially skewed from Cloud Spanner commit
1097            # timestamps.
1098            #
1099            # Useful for reading at nearby replicas without the distributed
1100            # timestamp negotiation overhead of `max_staleness`.
1101        "readTimestamp": "A String", # Executes all reads at the given timestamp. Unlike other modes,
1102            # reads at a specific timestamp are repeatable; the same read at
1103            # the same timestamp always returns the same data. If the
1104            # timestamp is in the future, the read will block until the
1105            # specified timestamp, modulo the read's deadline.
1106            #
1107            # Useful for large scale consistent reads such as mapreduces, or
1108            # for coordinating many reads against a consistent snapshot of the
1109            # data.
1110            #
1111            # A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
1112            # Example: `"2014-10-02T15:01:23.045123456Z"`.
1113        "strong": True or False, # Read at a timestamp where all previously committed transactions
1114            # are visible.
1115      },
1116      "partitionedDml": { # Message type to initiate a Partitioned DML transaction. # Partitioned DML transaction.
1117          #
1118          # Authorization to begin a Partitioned DML transaction requires
1119          # `spanner.databases.beginPartitionedDmlTransaction` permission
1120          # on the `session` resource.
1121      },
1122    },
1123  }
1124
1125  x__xgafv: string, V1 error format.
1126    Allowed values
1127      1 - v1 error format
1128      2 - v2 error format
1129
1130Returns:
1131  An object of the form:
1132
1133    { # The response for Commit.
1134    "commitTimestamp": "A String", # The Cloud Spanner timestamp at which the transaction committed.
1135  }</pre>
1136</div>
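<div class="method">
  <pre>Putting the schemas above together, here is a hypothetical sketch of a
CommitRequest body for this method. The table and column names reuse the
`UserEvents` example from the KeyRange notes; session names and credentials
are placeholders, so only the request body itself is built as runnable code.

```python
# Hypothetical sketch: a CommitRequest body matching the schema above,
# with one insert_or_update mutation and one delete keyed by a KeyRange.
def build_commit_body():
    return {
        # Single-use read-write transaction: non-idempotent on retry, as
        # noted above; use beginTransaction + commit if that matters.
        "singleUseTransaction": {"readWrite": {}},
        "mutations": [
            {
                "insertOrUpdate": {
                    "table": "UserEvents",
                    "columns": ["UserName", "EventDate"],
                    # One inner list per row; each list matches `columns`.
                    "values": [["Bob", "2014-09-23"]],
                }
            },
            {
                "delete": {
                    "table": "UserEvents",
                    "keySet": {
                        # KeyRange: all of Bob's events before the year 2000.
                        "ranges": [{
                            "startClosed": ["Bob"],
                            "endOpen": ["Bob", "2000-01-01"],
                        }],
                    },
                }
            },
        ],
    }

body = build_commit_body()
# With an authenticated service built via
# googleapiclient.discovery.build("spanner", "v1"), the call would be:
#   service.projects().instances().databases().sessions().commit(
#       session=SESSION_NAME, body=body).execute()
```
</pre>
</div>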
1137
1138<div class="method">
1139    <code class="details" id="create">create(database, body, x__xgafv=None)</code>
1140  <pre>Creates a new session. A session can be used to perform
1141transactions that read and/or modify data in a Cloud Spanner database.
1142Sessions are meant to be reused for many consecutive
1143transactions.
1144
1145Sessions can only execute one transaction at a time. To execute
1146multiple concurrent read-write/write-only transactions, create
1147multiple sessions. Note that standalone reads and queries use a
1148transaction internally, and count toward the one transaction
1149limit.
1150
1151Cloud Spanner limits the number of sessions that can exist at any given
1152time; thus, it is a good idea to delete idle and/or unneeded sessions.
1153Aside from explicit deletes, Cloud Spanner can delete sessions for which no
1154operations are sent for more than an hour. If a session is deleted,
1155requests to it return `NOT_FOUND`.
1156
1157Idle sessions can be kept alive by sending a trivial SQL query
1158periodically, e.g., `"SELECT 1"`.
1159
1160Args:
1161  database: string, Required. The database in which the new session is created. (required)
1162  body: object, The request body. (required)
1163    The object takes the form of:
1164
1165{ # The request for CreateSession.
1166    "session": { # A session in the Cloud Spanner API. # The session to create.
1167      "labels": { # The labels for the session.
1168          #
1169          #  * Label keys must be between 1 and 63 characters long and must conform to
1170          #    the following regular expression: `[a-z]([-a-z0-9]*[a-z0-9])?`.
1171          #  * Label values must be between 0 and 63 characters long and must conform
1172          #    to the regular expression `([a-z]([-a-z0-9]*[a-z0-9])?)?`.
1173          #  * No more than 64 labels can be associated with a given session.
1174          #
1175          # See https://goo.gl/xmQnxf for more information on and examples of labels.
1176        "a_key": "A String",
1177      },
1178      "name": "A String", # The name of the session. This is always system-assigned; values provided
1179          # when creating a session are ignored.
1180      "approximateLastUseTime": "A String", # Output only. The approximate timestamp when the session was last used. It is
1181          # typically earlier than the actual last use time.
1182      "createTime": "A String", # Output only. The timestamp when the session was created.
1183    },
1184  }
1185
1186  x__xgafv: string, V1 error format.
1187    Allowed values
1188      1 - v1 error format
1189      2 - v2 error format
1190
1191Returns:
1192  An object of the form:
1193
1194    { # A session in the Cloud Spanner API.
1195    "labels": { # The labels for the session.
1196        #
1197        #  * Label keys must be between 1 and 63 characters long and must conform to
1198        #    the following regular expression: `[a-z]([-a-z0-9]*[a-z0-9])?`.
1199        #  * Label values must be between 0 and 63 characters long and must conform
1200        #    to the regular expression `([a-z]([-a-z0-9]*[a-z0-9])?)?`.
1201        #  * No more than 64 labels can be associated with a given session.
1202        #
1203        # See https://goo.gl/xmQnxf for more information on and examples of labels.
1204      "a_key": "A String",
1205    },
1206    "name": "A String", # The name of the session. This is always system-assigned; values provided
1207        # when creating a session are ignored.
1208    "approximateLastUseTime": "A String", # Output only. The approximate timestamp when the session is last used. It is
1209        # typically earlier than the actual last use time.
1210    "createTime": "A String", # Output only. The timestamp when the session is created.
1211  }</pre>
1212</div>
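A minimal sketch of assembling and checking a `create` request body against the label rules documented above. The regular expressions and the 63-character/64-label limits are quoted from the doc text; the helper name `validate_session_labels` and the example labels are illustrative assumptions, not part of the client library.

```python
import re

# Patterns quoted verbatim from the label documentation above.
_KEY_RE = re.compile(r"[a-z]([-a-z0-9]*[a-z0-9])?")
_VALUE_RE = re.compile(r"([a-z]([-a-z0-9]*[a-z0-9])?)?")

def validate_session_labels(labels):
    """Check the documented constraints on session labels."""
    if len(labels) > 64:  # no more than 64 labels per session
        return False
    for key, value in labels.items():
        if not 1 <= len(key) <= 63 or not _KEY_RE.fullmatch(key):
            return False
        if len(value) > 63 or not _VALUE_RE.fullmatch(value):
            return False
    return True

# Request body in the documented CreateSession shape. "name", "createTime",
# and "approximateLastUseTime" are server-assigned, so only labels are sent.
body = {"session": {"labels": {"env": "test", "team": "storage"}}}
assert validate_session_labels(body["session"]["labels"])
# The body would then be passed to
# sessions.create(database="projects/p/instances/i/databases/d", body=body).
```

Because labels that violate these rules are rejected server-side, checking them client-side turns a round-trip failure into an immediate local error.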

<div class="method">
    <code class="details" id="delete">delete(name, x__xgafv=None)</code>
  <pre>Ends a session, releasing server resources associated with it. This will
asynchronously trigger cancellation of any operations that are running with
this session.

Args:
  name: string, Required. The name of the session to delete. (required)
  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # A generic empty message that you can re-use to avoid defining duplicated
      # empty messages in your APIs. A typical example is to use it as the request
      # or the response type of an API method. For instance:
      #
      #     service Foo {
      #       rpc Bar(google.protobuf.Empty) returns (google.protobuf.Empty);
      #     }
      #
      # The JSON representation for `Empty` is empty JSON object `{}`.
  }</pre>
</div>
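The one-hour idle window described for `create` above can be checked client-side before deciding whether to send a keep-alive query or let the session be deleted. A sketch under the assumption that timestamps arrive in the RFC 3339 UTC "Zulu" format shown in the session fields; the helper names are illustrative.

```python
from datetime import datetime, timedelta, timezone

# Sessions idle for more than an hour may be deleted by Cloud Spanner.
IDLE_LIMIT = timedelta(hours=1)

def _parse_rfc3339(ts):
    """Parse an RFC 3339 UTC "Zulu" timestamp like the session fields above."""
    # strptime's %f accepts at most six fractional digits, so pad or
    # truncate any nanosecond precision down to microseconds.
    base, _, frac = ts.rstrip("Z").partition(".")
    return datetime.strptime(
        base + "." + (frac + "000000")[:6], "%Y-%m-%dT%H:%M:%S.%f"
    ).replace(tzinfo=timezone.utc)

def session_needs_keepalive(approximate_last_use_time, now=None):
    """True once the session has been idle long enough to risk deletion."""
    now = now or datetime.now(timezone.utc)
    return now - _parse_rfc3339(approximate_last_use_time) > IDLE_LIMIT

# A session last used at 15:01 is past the one-hour window by 17:00, so a
# trivial query such as `SELECT 1` (or an explicit delete) is in order.
now = datetime(2014, 10, 2, 17, 0, tzinfo=timezone.utc)
assert session_needs_keepalive("2014-10-02T15:01:23.045123456Z", now=now)
assert not session_needs_keepalive("2014-10-02T16:30:00Z", now=now)
```

Note that `approximateLastUseTime` is documented as typically earlier than the actual last use, so this check errs on the side of sending a keep-alive slightly early.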

<div class="method">
    <code class="details" id="executeBatchDml">executeBatchDml(session, body, x__xgafv=None)</code>
  <pre>Executes a batch of SQL DML statements. This method allows many statements
to be run with lower latency than submitting them sequentially with
ExecuteSql.

Statements are executed in order, sequentially.
ExecuteBatchDmlResponse will contain a
ResultSet for each DML statement that has successfully executed. If a
statement fails, its error status will be returned as part of the
ExecuteBatchDmlResponse. Execution will
stop at the first failed statement; the remaining statements will not run.

ExecuteBatchDml is expected to return an OK status with a response even if
there was an error while processing one of the DML statements. Clients must
inspect response.status to determine if there were any errors while
processing the request.

See more details in
ExecuteBatchDmlRequest and
ExecuteBatchDmlResponse.

Args:
  session: string, Required. The session in which the DML statements should be performed. (required)
  body: object, The request body. (required)
    The object takes the form of:

{ # The request for ExecuteBatchDml.
    "seqno": "A String", # A per-transaction sequence number used to identify this request. This is
        # used in the same space as the seqno in
        # ExecuteSqlRequest. See more details
        # in ExecuteSqlRequest.
    "transaction": { # This message is used to select the transaction in which a # The transaction to use. A ReadWrite transaction is required. Single-use
        # transactions are not supported (to avoid replay).  The caller must either
        # supply an existing transaction ID or begin a new transaction.
        # Read or
        # ExecuteSql call runs.
        #
        # See TransactionOptions for more information about transactions.
      "begin": { # # Transactions # Begin a new transaction and execute this read or SQL query in
          # it. The transaction ID of the new transaction is returned in
          # ResultSetMetadata.transaction, which is a Transaction.
          #
          #
          # Each session can have at most one active transaction at a time. After the
          # active transaction is completed, the session can immediately be
          # re-used for the next transaction. It is not necessary to create a
          # new session for each transaction.
          #
          # # Transaction Modes
          #
          # Cloud Spanner supports three transaction modes:
          #
          #   1. Locking read-write. This type of transaction is the only way
          #      to write data into Cloud Spanner. These transactions rely on
          #      pessimistic locking and, if necessary, two-phase commit.
          #      Locking read-write transactions may abort, requiring the
          #      application to retry.
          #
          #   2. Snapshot read-only. This transaction type provides guaranteed
          #      consistency across several reads, but does not allow
          #      writes. Snapshot read-only transactions can be configured to
          #      read at timestamps in the past. Snapshot read-only
          #      transactions do not need to be committed.
          #
          #   3. Partitioned DML. This type of transaction is used to execute
          #      a single Partitioned DML statement. Partitioned DML partitions
          #      the key space and runs the DML statement over each partition
          #      in parallel using separate, internal transactions that commit
          #      independently. Partitioned DML transactions do not need to be
          #      committed.
          #
          # For transactions that only read, snapshot read-only transactions
          # provide simpler semantics and are almost always faster. In
          # particular, read-only transactions do not take locks, so they do
          # not conflict with read-write transactions. As a consequence of not
          # taking locks, they also do not abort, so retry loops are not needed.
          #
          # Transactions may only read/write data in a single database. They
          # may, however, read/write data in different tables within that
          # database.
          #
          # ## Locking Read-Write Transactions
          #
          # Locking transactions may be used to atomically read-modify-write
          # data anywhere in a database. This type of transaction is externally
          # consistent.
          #
          # Clients should attempt to minimize the amount of time a transaction
          # is active. Faster transactions commit with higher probability
          # and cause less contention. Cloud Spanner attempts to keep read locks
          # active as long as the transaction continues to do reads, and the
          # transaction has not been terminated by
          # Commit or
          # Rollback.  Long periods of
          # inactivity at the client may cause Cloud Spanner to release a
          # transaction's locks and abort it.
          #
          # Conceptually, a read-write transaction consists of zero or more
          # reads or SQL statements followed by
          # Commit. At any time before
          # Commit, the client can send a
          # Rollback request to abort the
          # transaction.
          #
          # ### Semantics
          #
          # Cloud Spanner can commit the transaction if all read locks it acquired
          # are still valid at commit time, and it is able to acquire write
          # locks for all writes. Cloud Spanner can abort the transaction for any
          # reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees
          # that the transaction has not modified any user data in Cloud Spanner.
          #
          # Unless the transaction commits, Cloud Spanner makes no guarantees about
          # how long the transaction's locks were held for. It is an error to
          # use Cloud Spanner locks for any sort of mutual exclusion other than
          # between Cloud Spanner transactions themselves.
          #
          # ### Retrying Aborted Transactions
          #
          # When a transaction aborts, the application can choose to retry the
          # whole transaction again. To maximize the chances of successfully
          # committing the retry, the client should execute the retry in the
          # same session as the original attempt. The original session's lock
          # priority increases with each consecutive abort, meaning that each
          # attempt has a slightly better chance of success than the previous.
          #
          # Under some circumstances (e.g., many transactions attempting to
          # modify the same row(s)), a transaction can abort many times in a
          # short period before successfully committing. Thus, it is not a good
          # idea to cap the number of retries a transaction can attempt;
          # instead, it is better to limit the total amount of wall time spent
          # retrying.
          #
          # ### Idle Transactions
          #
          # A transaction is considered idle if it has no outstanding reads or
          # SQL queries and has not started a read or SQL query within the last 10
          # seconds. Idle transactions can be aborted by Cloud Spanner so that they
          # don't hold on to locks indefinitely. In that case, the commit will
          # fail with error `ABORTED`.
          #
          # If this behavior is undesirable, periodically executing a simple
          # SQL query in the transaction (e.g., `SELECT 1`) prevents the
          # transaction from becoming idle.
          #
          # ## Snapshot Read-Only Transactions
          #
          # Snapshot read-only transactions provide a simpler method than
          # locking read-write transactions for doing several consistent
          # reads. However, this type of transaction does not support writes.
          #
          # Snapshot transactions do not take locks. Instead, they work by
          # choosing a Cloud Spanner timestamp, then executing all reads at that
          # timestamp. Since they do not acquire locks, they do not block
          # concurrent read-write transactions.
          #
          # Unlike locking read-write transactions, snapshot read-only
          # transactions never abort. They can fail if the chosen read
          # timestamp is garbage collected; however, the default garbage
          # collection policy is generous enough that most applications do not
          # need to worry about this in practice.
          #
          # Snapshot read-only transactions do not need to call
          # Commit or
          # Rollback (and in fact are not
          # permitted to do so).
          #
          # To execute a snapshot transaction, the client specifies a timestamp
          # bound, which tells Cloud Spanner how to choose a read timestamp.
          #
          # The types of timestamp bound are:
          #
          #   - Strong (the default).
          #   - Bounded staleness.
          #   - Exact staleness.
          #
          # If the Cloud Spanner database to be read is geographically distributed,
          # stale read-only transactions can execute more quickly than strong
          # or read-write transactions, because they are able to execute far
          # from the leader replica.
          #
          # Each type of timestamp bound is discussed in detail below.
          #
          # ### Strong
          #
          # Strong reads are guaranteed to see the effects of all transactions
          # that have committed before the start of the read. Furthermore, all
          # rows yielded by a single read are consistent with each other -- if
          # any part of the read observes a transaction, all parts of the read
          # see the transaction.
          #
          # Strong reads are not repeatable: two consecutive strong read-only
          # transactions might return inconsistent results if there are
          # concurrent writes. If consistency across reads is required, the
          # reads should be executed within a transaction or at an exact read
          # timestamp.
          #
          # See TransactionOptions.ReadOnly.strong.
          #
          # ### Exact Staleness
          #
          # These timestamp bounds execute reads at a user-specified
          # timestamp. Reads at a timestamp are guaranteed to see a consistent
          # prefix of the global transaction history: they observe
          # modifications done by all transactions with a commit timestamp <=
          # the read timestamp, and observe none of the modifications done by
          # transactions with a larger commit timestamp. They will block until
          # all conflicting transactions that may be assigned commit timestamps
          # <= the read timestamp have finished.
          #
          # The timestamp can either be expressed as an absolute Cloud Spanner commit
          # timestamp or a staleness relative to the current time.
          #
          # These modes do not require a "negotiation phase" to pick a
          # timestamp. As a result, they execute slightly faster than the
          # equivalent boundedly stale concurrency modes. On the other hand,
          # boundedly stale reads usually return fresher results.
          #
          # See TransactionOptions.ReadOnly.read_timestamp and
          # TransactionOptions.ReadOnly.exact_staleness.
          #
          # ### Bounded Staleness
          #
          # Bounded staleness modes allow Cloud Spanner to pick the read timestamp,
          # subject to a user-provided staleness bound. Cloud Spanner chooses the
          # newest timestamp within the staleness bound that allows execution
          # of the reads at the closest available replica without blocking.
          #
          # All rows yielded are consistent with each other -- if any part of
          # the read observes a transaction, all parts of the read see the
          # transaction. Boundedly stale reads are not repeatable: two stale
          # reads, even if they use the same staleness bound, can execute at
          # different timestamps and thus return inconsistent results.
          #
          # Boundedly stale reads execute in two phases: the first phase
          # negotiates a timestamp among all replicas needed to serve the
          # read. In the second phase, reads are executed at the negotiated
          # timestamp.
          #
          # As a result of the two phase execution, bounded staleness reads are
          # usually a little slower than comparable exact staleness
          # reads. However, they are typically able to return fresher
          # results, and are more likely to execute at the closest replica.
          #
          # Because the timestamp negotiation requires up-front knowledge of
          # which rows will be read, it can only be used with single-use
          # read-only transactions.
          #
          # See TransactionOptions.ReadOnly.max_staleness and
          # TransactionOptions.ReadOnly.min_read_timestamp.
          #
          # ### Old Read Timestamps and Garbage Collection
          #
          # Cloud Spanner continuously garbage collects deleted and overwritten data
          # in the background to reclaim storage space. This process is known
          # as "version GC". By default, version GC reclaims versions after they
          # are one hour old. Because of this, Cloud Spanner cannot perform reads
          # at read timestamps more than one hour in the past. This
          # restriction also applies to in-progress reads and/or SQL queries whose
          # timestamps become too old while executing. Reads and SQL queries with
          # too-old read timestamps fail with the error `FAILED_PRECONDITION`.
          #
          # ## Partitioned DML Transactions
          #
          # Partitioned DML transactions are used to execute DML statements with a
          # different execution strategy that provides different, and often better,
          # scalability properties for large, table-wide operations than DML in a
          # ReadWrite transaction. Smaller scoped statements, such as an OLTP workload,
          # should prefer using ReadWrite transactions.
          #
          # Partitioned DML partitions the keyspace and runs the DML statement on each
          # partition in separate, internal transactions. These transactions commit
          # automatically when complete, and run independently from one another.
          #
          # To reduce lock contention, this execution strategy only acquires read locks
          # on rows that match the WHERE clause of the statement. Additionally, the
          # smaller per-partition transactions hold locks for less time.
          #
          # That said, Partitioned DML is not a drop-in replacement for standard DML used
          # in ReadWrite transactions.
          #
          #  - The DML statement must be fully-partitionable. Specifically, the statement
          #    must be expressible as the union of many statements which each access only
          #    a single row of the table.
          #
          #  - The statement is not applied atomically to all rows of the table. Rather,
          #    the statement is applied atomically to partitions of the table, in
          #    independent transactions. Secondary index rows are updated atomically
          #    with the base table rows.
          #
          #  - Partitioned DML does not guarantee exactly-once execution semantics
          #    against a partition. The statement will be applied at least once to each
          #    partition. It is strongly recommended that the DML statement should be
          #    idempotent to avoid unexpected results. For instance, it is potentially
          #    dangerous to run a statement such as
          #    `UPDATE table SET column = column + 1` as it could be run multiple times
          #    against some rows.
          #
          #  - The partitions are committed automatically - there is no support for
          #    Commit or Rollback. If the call returns an error, or if the client issuing
          #    the ExecuteSql call dies, it is possible that some rows had the statement
          #    executed on them successfully. It is also possible that the statement was
          #    never executed against other rows.
          #
          #  - Partitioned DML transactions may only contain the execution of a single
          #    DML statement via ExecuteSql or ExecuteStreamingSql.
          #
          #  - If any error is encountered during the execution of the partitioned DML
          #    operation (for instance, a UNIQUE INDEX violation, division by zero, or a
          #    value that cannot be stored due to schema constraints), then the
          #    operation is stopped at that point and an error is returned. It is
          #    possible that at this point, some partitions have been committed (or even
          #    committed multiple times), and other partitions have not been run at all.
          #
          # Given the above, Partitioned DML is a good fit for large, database-wide
          # operations that are idempotent, such as deleting old rows from a very large
          # table.
        "readWrite": { # Message type to initiate a read-write transaction. Currently this # Transaction may write.
            #
            # Authorization to begin a read-write transaction requires
            # `spanner.databases.beginOrRollbackReadWriteTransaction` permission
            # on the `session` resource.
            # transaction type has no options.
        },
        "readOnly": { # Message type to initiate a read-only transaction. # Transaction will not write.
            #
            # Authorization to begin a read-only transaction requires
            # `spanner.databases.beginReadOnlyTransaction` permission
            # on the `session` resource.
          "minReadTimestamp": "A String", # Executes all reads at a timestamp >= `min_read_timestamp`.
              #
              # This is useful for requesting fresher data than some previous
              # read, or data that is fresh enough to observe the effects of some
              # previously committed transaction whose timestamp is known.
              #
              # Note that this option can only be used in single-use transactions.
              #
              # A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
              # Example: `"2014-10-02T15:01:23.045123456Z"`.
          "returnReadTimestamp": True or False, # If true, the Cloud Spanner-selected read timestamp is included in
              # the Transaction message that describes the transaction.
          "maxStaleness": "A String", # Read data at a timestamp >= `NOW - max_staleness`
              # seconds. Guarantees that all writes that have committed more
              # than the specified number of seconds ago are visible. Because
              # Cloud Spanner chooses the exact timestamp, this mode works even if
              # the client's local clock is substantially skewed from Cloud Spanner
              # commit timestamps.
              #
              # Useful for reading the freshest data available at a nearby
              # replica, while bounding the possible staleness if the local
              # replica has fallen behind.
              #
              # Note that this option can only be used in single-use
              # transactions.
          "exactStaleness": "A String", # Executes all reads at a timestamp that is `exact_staleness`
              # old. The timestamp is chosen soon after the read is started.
              #
              # Guarantees that all writes that have committed more than the
              # specified number of seconds ago are visible. Because Cloud Spanner
              # chooses the exact timestamp, this mode works even if the client's
              # local clock is substantially skewed from Cloud Spanner commit
              # timestamps.
              #
              # Useful for reading at nearby replicas without the distributed
              # timestamp negotiation overhead of `max_staleness`.
          "readTimestamp": "A String", # Executes all reads at the given timestamp. Unlike other modes,
              # reads at a specific timestamp are repeatable; the same read at
              # the same timestamp always returns the same data. If the
              # timestamp is in the future, the read will block until the
              # specified timestamp, modulo the read's deadline.
              #
              # Useful for large scale consistent reads such as mapreduces, or
              # for coordinating many reads against a consistent snapshot of the
              # data.
              #
              # A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
              # Example: `"2014-10-02T15:01:23.045123456Z"`.
          "strong": True or False, # Read at a timestamp where all previously committed transactions
              # are visible.
        },
        "partitionedDml": { # Message type to initiate a Partitioned DML transaction. # Partitioned DML transaction.
            #
            # Authorization to begin a Partitioned DML transaction requires
            # `spanner.databases.beginPartitionedDmlTransaction` permission
            # on the `session` resource.
        },
      },
      "singleUse": { # # Transactions # Execute the read or SQL query in a temporary transaction.
          # This is the most efficient way to execute a transaction that
          # consists of a single SQL query.
          #
          #
          # Each session can have at most one active transaction at a time. After the
          # active transaction is completed, the session can immediately be
          # re-used for the next transaction. It is not necessary to create a
          # new session for each transaction.
          #
          # # Transaction Modes
          #
          # Cloud Spanner supports three transaction modes:
          #
          #   1. Locking read-write. This type of transaction is the only way
          #      to write data into Cloud Spanner. These transactions rely on
          #      pessimistic locking and, if necessary, two-phase commit.
          #      Locking read-write transactions may abort, requiring the
          #      application to retry.
          #
          #   2. Snapshot read-only. This transaction type provides guaranteed
          #      consistency across several reads, but does not allow
          #      writes. Snapshot read-only transactions can be configured to
          #      read at timestamps in the past. Snapshot read-only
          #      transactions do not need to be committed.
          #
          #   3. Partitioned DML. This type of transaction is used to execute
          #      a single Partitioned DML statement. Partitioned DML partitions
          #      the key space and runs the DML statement over each partition
          #      in parallel using separate, internal transactions that commit
          #      independently. Partitioned DML transactions do not need to be
          #      committed.
          #
          # For transactions that only read, snapshot read-only transactions
          # provide simpler semantics and are almost always faster. In
          # particular, read-only transactions do not take locks, so they do
          # not conflict with read-write transactions. As a consequence of not
          # taking locks, they also do not abort, so retry loops are not needed.
          #
          # Transactions may only read/write data in a single database. They
          # may, however, read/write data in different tables within that
          # database.
          #
          # ## Locking Read-Write Transactions
          #
          # Locking transactions may be used to atomically read-modify-write
          # data anywhere in a database. This type of transaction is externally
          # consistent.
          #
          # Clients should attempt to minimize the amount of time a transaction
          # is active. Faster transactions commit with higher probability
          # and cause less contention. Cloud Spanner attempts to keep read locks
          # active as long as the transaction continues to do reads, and the
          # transaction has not been terminated by
          # Commit or
          # Rollback.  Long periods of
          # inactivity at the client may cause Cloud Spanner to release a
          # transaction's locks and abort it.
          #
          # Conceptually, a read-write transaction consists of zero or more
          # reads or SQL statements followed by
          # Commit. At any time before
          # Commit, the client can send a
          # Rollback request to abort the
          # transaction.
          #
          # ### Semantics
          #
          # Cloud Spanner can commit the transaction if all read locks it acquired
          # are still valid at commit time, and it is able to acquire write
          # locks for all writes. Cloud Spanner can abort the transaction for any
          # reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees
          # that the transaction has not modified any user data in Cloud Spanner.
          #
          # Unless the transaction commits, Cloud Spanner makes no guarantees about
          # how long the transaction's locks were held for. It is an error to
          # use Cloud Spanner locks for any sort of mutual exclusion other than
          # between Cloud Spanner transactions themselves.
          #
          # ### Retrying Aborted Transactions
          #
          # When a transaction aborts, the application can choose to retry the
          # whole transaction again. To maximize the chances of successfully
          # committing the retry, the client should execute the retry in the
          # same session as the original attempt. The original session's lock
          # priority increases with each consecutive abort, meaning that each
          # attempt has a slightly better chance of success than the previous.
          #
          # Under some circumstances (e.g., many transactions attempting to
          # modify the same row(s)), a transaction can abort many times in a
          # short period before successfully committing. Thus, it is not a good
          # idea to cap the number of retries a transaction can attempt;
          # instead, it is better to limit the total amount of wall time spent
          # retrying.
          #
          # ### Idle Transactions
          #
          # A transaction is considered idle if it has no outstanding reads or
          # SQL queries and has not started a read or SQL query within the last 10
          # seconds. Idle transactions can be aborted by Cloud Spanner so that they
          # don't hold on to locks indefinitely. In that case, the commit will
          # fail with error `ABORTED`.
          #
          # If this behavior is undesirable, periodically executing a simple
          # SQL query in the transaction (e.g., `SELECT 1`) prevents the
          # transaction from becoming idle.
          #
          # ## Snapshot Read-Only Transactions
          #
          # Snapshot read-only transactions provide a simpler method than
          # locking read-write transactions for doing several consistent
          # reads. However, this type of transaction does not support writes.
          #
          # Snapshot transactions do not take locks. Instead, they work by
          # choosing a Cloud Spanner timestamp, then executing all reads at that
          # timestamp. Since they do not acquire locks, they do not block
          # concurrent read-write transactions.
          #
          # Unlike locking read-write transactions, snapshot read-only
          # transactions never abort. They can fail if the chosen read
          # timestamp is garbage collected; however, the default garbage
          # collection policy is generous enough that most applications do not
          # need to worry about this in practice.
          #
          # Snapshot read-only transactions do not need to call
          # Commit or
          # Rollback (and in fact are not
          # permitted to do so).
          #
          # To execute a snapshot transaction, the client specifies a timestamp
          # bound, which tells Cloud Spanner how to choose a read timestamp.
          #
          # The types of timestamp bound are:
          #
          #   - Strong (the default).
          #   - Bounded staleness.
          #   - Exact staleness.
          #
1768          # If the Cloud Spanner database to be read is geographically distributed,
1769          # stale read-only transactions can execute more quickly than strong
1770          # or read-write transactions, because they are able to execute far
1771          # from the leader replica.
1772          #
1773          # Each type of timestamp bound is discussed in detail below.
1774          #
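As a rough sketch, the three bounds map to `TransactionOptions` request bodies like the following (field names match the schema below; the timestamp and staleness values are made up):

```python
# Strong (the default): see all transactions committed before the read starts.
strong = {"readOnly": {"strong": True}}

# Exact staleness: read at NOW minus a fixed staleness, or at an exact timestamp.
exact = {"readOnly": {"exactStaleness": "10s"}}
at_timestamp = {"readOnly": {"readTimestamp": "2014-10-02T15:01:23.045123456Z"}}

# Bounded staleness: let Cloud Spanner pick the newest timestamp within the
# bound (single-use transactions only).
bounded = {"readOnly": {"maxStaleness": "15s"}}
```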
1775          # ### Strong
1776          #
1777          # Strong reads are guaranteed to see the effects of all transactions
1778          # that have committed before the start of the read. Furthermore, all
1779          # rows yielded by a single read are consistent with each other -- if
1780          # any part of the read observes a transaction, all parts of the read
1781          # see the transaction.
1782          #
1783          # Strong reads are not repeatable: two consecutive strong read-only
1784          # transactions might return inconsistent results if there are
1785          # concurrent writes. If consistency across reads is required, the
1786          # reads should be executed within a transaction or at an exact read
1787          # timestamp.
1788          #
1789          # See TransactionOptions.ReadOnly.strong.
1790          #
1791          # ### Exact Staleness
1792          #
1793          # These timestamp bounds execute reads at a user-specified
1794          # timestamp. Reads at a timestamp are guaranteed to see a consistent
1795          # prefix of the global transaction history: they observe
1796          # modifications done by all transactions with a commit timestamp <=
1797          # the read timestamp, and observe none of the modifications done by
1798          # transactions with a larger commit timestamp. They will block until
1799          # all conflicting transactions that may be assigned commit timestamps
1800          # <= the read timestamp have finished.
1801          #
1802          # The timestamp can either be expressed as an absolute Cloud Spanner commit
1803          # timestamp or a staleness relative to the current time.
1804          #
1805          # These modes do not require a "negotiation phase" to pick a
1806          # timestamp. As a result, they execute slightly faster than the
1807          # equivalent boundedly stale concurrency modes. On the other hand,
1808          # boundedly stale reads usually return fresher results.
1809          #
1810          # See TransactionOptions.ReadOnly.read_timestamp and
1811          # TransactionOptions.ReadOnly.exact_staleness.
1812          #
1813          # ### Bounded Staleness
1814          #
1815          # Bounded staleness modes allow Cloud Spanner to pick the read timestamp,
1816          # subject to a user-provided staleness bound. Cloud Spanner chooses the
1817          # newest timestamp within the staleness bound that allows execution
1818          # of the reads at the closest available replica without blocking.
1819          #
1820          # All rows yielded are consistent with each other -- if any part of
1821          # the read observes a transaction, all parts of the read see the
1822          # transaction. Boundedly stale reads are not repeatable: two stale
1823          # reads, even if they use the same staleness bound, can execute at
1824          # different timestamps and thus return inconsistent results.
1825          #
1826          # Boundedly stale reads execute in two phases: the first phase
1827          # negotiates a timestamp among all replicas needed to serve the
1828          # read. In the second phase, reads are executed at the negotiated
1829          # timestamp.
1830          #
1831          # As a result of the two-phase execution, bounded staleness reads are
1832          # usually a little slower than comparable exact staleness
1833          # reads. However, they are typically able to return fresher
1834          # results, and are more likely to execute at the closest replica.
1835          #
1836          # Because the timestamp negotiation requires up-front knowledge of
1837          # which rows will be read, it can only be used with single-use
1838          # read-only transactions.
1839          #
1840          # See TransactionOptions.ReadOnly.max_staleness and
1841          # TransactionOptions.ReadOnly.min_read_timestamp.
1842          #
1843          # ### Old Read Timestamps and Garbage Collection
1844          #
1845          # Cloud Spanner continuously garbage collects deleted and overwritten data
1846          # in the background to reclaim storage space. This process is known
1847          # as "version GC". By default, version GC reclaims versions after they
1848          # are one hour old. Because of this, Cloud Spanner cannot perform reads
1849          # at read timestamps more than one hour in the past. This
1850          # restriction also applies to in-progress reads and/or SQL queries whose
1851          # timestamps become too old while executing. Reads and SQL queries with
1852          # too-old read timestamps fail with the error `FAILED_PRECONDITION`.
1853          #
1854          # ## Partitioned DML Transactions
1855          #
1856          # Partitioned DML transactions are used to execute DML statements with a
1857          # different execution strategy that provides different, and often better,
1858          # scalability properties for large, table-wide operations than DML in a
1859          # ReadWrite transaction. Smaller scoped statements, such as an OLTP workload,
1860          # should prefer using ReadWrite transactions.
1861          #
1862          # Partitioned DML partitions the keyspace and runs the DML statement on each
1863          # partition in separate, internal transactions. These transactions commit
1864          # automatically when complete, and run independently from one another.
1865          #
1866          # To reduce lock contention, this execution strategy only acquires read locks
1867          # on rows that match the WHERE clause of the statement. Additionally, the
1868          # smaller per-partition transactions hold locks for less time.
1869          #
1870          # That said, Partitioned DML is not a drop-in replacement for standard DML used
1871          # in ReadWrite transactions.
1872          #
1873          #  - The DML statement must be fully-partitionable. Specifically, the statement
1874          #    must be expressible as the union of many statements which each access only
1875          #    a single row of the table.
1876          #
1877          #  - The statement is not applied atomically to all rows of the table. Rather,
1878          #    the statement is applied atomically to partitions of the table, in
1879          #    independent transactions. Secondary index rows are updated atomically
1880          #    with the base table rows.
1881          #
1882          #  - Partitioned DML does not guarantee exactly-once execution semantics
1883          #    against a partition. The statement will be applied at least once to each
1884          #    partition. It is strongly recommended that the DML statement be
1885          #    idempotent to avoid unexpected results. For instance, it is potentially
1886          #    dangerous to run a statement such as
1887          #    `UPDATE table SET column = column + 1` as it could be run multiple times
1888          #    against some rows.
1889          #
1890          #  - The partitions are committed automatically - there is no support for
1891          #    Commit or Rollback. If the call returns an error, or if the client issuing
1892          #    the ExecuteSql call dies, it is possible that some rows had the statement
1893          #    executed on them successfully. It is also possible that the statement was
1894          #    never executed against other rows.
1895          #
1896          #  - Partitioned DML transactions may only contain the execution of a single
1897          #    DML statement via ExecuteSql or ExecuteStreamingSql.
1898          #
1899          #  - If any error is encountered during the execution of the partitioned DML
1900          #    operation (for instance, a UNIQUE INDEX violation, division by zero, or a
1901          #    value that cannot be stored due to schema constraints), then the
1902          #    operation is stopped at that point and an error is returned. It is
1903          #    possible that at this point, some partitions have been committed (or even
1904          #    committed multiple times), and other partitions have not been run at all.
1905          #
1906          # Given the above, Partitioned DML is a good fit for large, database-wide
1907          # operations that are idempotent, such as deleting old rows from a very large
1908          # table.
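To make the idempotency point concrete, here is a hedged illustration (the simulation below stands in for a partition replay; it is not part of the API):

```python
# Partitioned DML applies a statement *at least once* per partition, so
# re-execution must be harmless. Simulated on a plain dict "row":
def apply_at_least_once(update, row, times=2):
    for _ in range(times):
        update(row)
    return row

def increment(row):          # NOT idempotent: each replay adds 1 again
    row["col"] = row["col"] + 1

def set_absolute(row):       # idempotent: replays converge to the same state
    row["col"] = 0
```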
1909        "readWrite": { # Message type to initiate a read-write transaction. Currently this # Transaction may write.
1910            # transaction type has no options.
1911            #
1912            # Authorization to begin a read-write transaction requires
1913            # `spanner.databases.beginOrRollbackReadWriteTransaction` permission
1914            # on the `session` resource.
1915        },
1916        "readOnly": { # Message type to initiate a read-only transaction. # Transaction will not write.
1917            #
1918            # Authorization to begin a read-only transaction requires
1919            # `spanner.databases.beginReadOnlyTransaction` permission
1920            # on the `session` resource.
1921          "minReadTimestamp": "A String", # Executes all reads at a timestamp >= `min_read_timestamp`.
1922              #
1923              # This is useful for requesting fresher data than some previous
1924              # read, or data that is fresh enough to observe the effects of some
1925              # previously committed transaction whose timestamp is known.
1926              #
1927              # Note that this option can only be used in single-use transactions.
1928              #
1929              # A timestamp in RFC3339 UTC "Zulu" format, accurate to nanoseconds.
1930              # Example: `"2014-10-02T15:01:23.045123456Z"`.
1931          "returnReadTimestamp": True or False, # If true, the Cloud Spanner-selected read timestamp is included in
1932              # the Transaction message that describes the transaction.
1933          "maxStaleness": "A String", # Read data at a timestamp >= `NOW - max_staleness`
1934              # seconds. Guarantees that all writes that have committed more
1935              # than the specified number of seconds ago are visible. Because
1936              # Cloud Spanner chooses the exact timestamp, this mode works even if
1937              # the client's local clock is substantially skewed from Cloud Spanner
1938              # commit timestamps.
1939              #
1940              # Useful for reading the freshest data available at a nearby
1941              # replica, while bounding the possible staleness if the local
1942              # replica has fallen behind.
1943              #
1944              # Note that this option can only be used in single-use
1945              # transactions.
1946          "exactStaleness": "A String", # Executes all reads at a timestamp that is `exact_staleness`
1947              # old. The timestamp is chosen soon after the read is started.
1948              #
1949              # Guarantees that all writes that have committed more than the
1950              # specified number of seconds ago are visible. Because Cloud Spanner
1951              # chooses the exact timestamp, this mode works even if the client's
1952              # local clock is substantially skewed from Cloud Spanner commit
1953              # timestamps.
1954              #
1955              # Useful for reading at nearby replicas without the distributed
1956              # timestamp negotiation overhead of `max_staleness`.
1957          "readTimestamp": "A String", # Executes all reads at the given timestamp. Unlike other modes,
1958              # reads at a specific timestamp are repeatable; the same read at
1959              # the same timestamp always returns the same data. If the
1960              # timestamp is in the future, the read will block until the
1961              # specified timestamp, modulo the read's deadline.
1962              #
1963              # Useful for large scale consistent reads such as mapreduces, or
1964              # for coordinating many reads against a consistent snapshot of the
1965              # data.
1966              #
1967              # A timestamp in RFC3339 UTC "Zulu" format, accurate to nanoseconds.
1968              # Example: `"2014-10-02T15:01:23.045123456Z"`.
1969          "strong": True or False, # Read at a timestamp where all previously committed transactions
1970              # are visible.
1971        },
1972        "partitionedDml": { # Message type to initiate a Partitioned DML transaction. # Partitioned DML transaction.
1973            #
1974            # Authorization to begin a Partitioned DML transaction requires
1975            # `spanner.databases.beginPartitionedDmlTransaction` permission
1976            # on the `session` resource.
1977        },
1978      },
1979      "id": "A String", # Execute the read or SQL query in a previously-started transaction.
1980    },
1981    "statements": [ # The list of statements to execute in this batch. Statements are executed
1982        # serially, such that the effects of statement i are visible to statement
1983        # i+1. Each statement must be a DML statement. Execution will stop at the
1984        # first failed statement; the remaining statements will not run.
1985        #
1986        # REQUIRES: `statements_size()` > 0.
1987      { # A single DML statement.
1988        "paramTypes": { # It is not always possible for Cloud Spanner to infer the right SQL type
1989            # from a JSON value.  For example, values of type `BYTES` and values
1990            # of type `STRING` both appear in params as JSON strings.
1991            #
1992            # In these cases, `param_types` can be used to specify the exact
1993            # SQL type for some or all of the SQL statement parameters. See the
1994            # definition of Type for more information
1995            # about SQL types.
1996          "a_key": { # `Type` indicates the type of a Cloud Spanner value, as might be stored in a
1997              # table cell or returned from an SQL query.
1998            "structType": # Object with schema name: StructType # If code == STRUCT, then `struct_type`
1999                # provides type information for the struct's fields.
2000            "code": "A String", # Required. The TypeCode for this type.
2001            "arrayElementType": # Object with schema name: Type # If code == ARRAY, then `array_element_type`
2002                # is the type of the array elements.
2003          },
2004        },
2005        "params": { # The DML string can contain parameter placeholders. A parameter
2006            # placeholder consists of `'@'` followed by the parameter
2007            # name. Parameter names consist of any combination of letters,
2008            # numbers, and underscores.
2009            #
2010            # Parameters can appear anywhere that a literal value is expected.  The
2011            # same parameter name can be used more than once, for example:
2012            #   `"WHERE id > @msg_id AND id < @msg_id + 100"`
2013            #
2014            # It is an error to execute an SQL statement with unbound parameters.
2015            #
2016            # Parameter values are specified using `params`, which is a JSON
2017            # object whose keys are parameter names, and whose values are the
2018            # corresponding parameter values.
2019          "a_key": "", # Properties of the object.
2020        },
2021        "sql": "A String", # Required. The DML string.
2022      },
2023    ],
2024  }
2025
2026  x__xgafv: string, V1 error format.
2027    Allowed values
2028      1 - v1 error format
2029      2 - v2 error format
2030
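Putting the request schema together, a body for this method might look like the following sketch (table, column, and parameter names are invented, and the transaction selector is abbreviated):

```python
import re

body = {
    "transaction": {"id": "existing-transaction-id"},  # assumed selector shape
    "statements": [
        {
            "sql": "UPDATE Singers SET Status = @status WHERE SingerId > @id",
            "params": {"status": "active", "id": "42"},
            # BYTES vs STRING and INT64 vs STRING are ambiguous as JSON
            # values, so pin the SQL types explicitly:
            "paramTypes": {
                "status": {"code": "STRING"},
                "id": {"code": "INT64"},
            },
        },
    ],
}

def unbound_params(statement):
    # Every @placeholder must have a binding; unbound parameters are an error.
    placeholders = set(re.findall(r"@(\w+)", statement["sql"]))
    return placeholders - set(statement.get("params", {}))
```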
2031Returns:
2032  An object of the form:
2033
2034    { # The response for ExecuteBatchDml. Contains a list
2035      # of ResultSet, one for each DML statement that has successfully executed.
2036      # If a statement fails, the error is returned as part of the response payload.
2037      # Clients can determine whether all DML statements have run successfully, or if
2038      # a statement failed, using one of the following approaches:
2039      #
2040      #   1. Check if the `status` field is `OkStatus`.
2041      #   2. Check if `result_sets_size()` equals the number of statements in
2042      #      ExecuteBatchDmlRequest.
2043      #
2044      # Example 1: A request with 5 DML statements, all executed successfully.
2045      #
2046      # Result: A response with 5 ResultSets, one for each statement in the same
2047      # order, and an `OkStatus`.
2048      #
2049      # Example 2: A request with 5 DML statements. The 3rd statement has a syntax
2050      # error.
2051      #
2052      # Result: A response with 2 ResultSets, for the first 2 statements that
2053      # ran successfully, and a syntax error (`INVALID_ARGUMENT`) status. From
2054      # `result_sets_size()`, the client can determine that the 3rd statement failed.
2055    "status": { # The `Status` type defines a logical error model that is suitable for # If all DML statements are executed successfully, status will be OK.
2056        # Otherwise, the error status of the first failed statement.
2057        # different programming environments, including REST APIs and RPC APIs. It is
2058        # used by [gRPC](https://github.com/grpc). The error model is designed to be:
2059        #
2060        # - Simple to use and understand for most users
2061        # - Flexible enough to meet unexpected needs
2062        #
2063        # # Overview
2064        #
2065        # The `Status` message contains three pieces of data: error code, error
2066        # message, and error details. The error code should be an enum value of
2067        # google.rpc.Code, but it may accept additional error codes if needed.  The
2068        # error message should be a developer-facing English message that helps
2069        # developers *understand* and *resolve* the error. If a localized user-facing
2070        # error message is needed, put the localized message in the error details or
2071        # localize it in the client. The optional error details may contain arbitrary
2072        # information about the error. There is a predefined set of error detail types
2073        # in the package `google.rpc` that can be used for common error conditions.
2074        #
2075        # # Language mapping
2076        #
2077        # The `Status` message is the logical representation of the error model, but it
2078        # is not necessarily the actual wire format. When the `Status` message is
2079        # exposed in different client libraries and different wire protocols, it can be
2080        # mapped differently. For example, it will likely be mapped to some exceptions
2081        # in Java, but more likely mapped to some error codes in C.
2082        #
2083        # # Other uses
2084        #
2085        # The error model and the `Status` message can be used in a variety of
2086        # environments, either with or without APIs, to provide a
2087        # consistent developer experience across different environments.
2088        #
2089        # Example uses of this error model include:
2090        #
2091        # - Partial errors. If a service needs to return partial errors to the client,
2092        #     it may embed the `Status` in the normal response to indicate the partial
2093        #     errors.
2094        #
2095        # - Workflow errors. A typical workflow has multiple steps. Each step may
2096        #     have a `Status` message for error reporting.
2097        #
2098        # - Batch operations. If a client uses batch request and batch response, the
2099        #     `Status` message should be used directly inside batch response, one for
2100        #     each error sub-response.
2101        #
2102        # - Asynchronous operations. If an API call embeds asynchronous operation
2103        #     results in its response, the status of those operations should be
2104        #     represented directly using the `Status` message.
2105        #
2106        # - Logging. If some API errors are stored in logs, the message `Status` could
2107        #     be used directly after any stripping needed for security/privacy reasons.
2108      "message": "A String", # A developer-facing error message, which should be in English. Any
2109          # user-facing error message should be localized and sent in the
2110          # google.rpc.Status.details field, or localized by the client.
2111      "code": 42, # The status code, which should be an enum value of google.rpc.Code.
2112      "details": [ # A list of messages that carry the error details.  There is a common set of
2113          # message types for APIs to use.
2114        {
2115          "a_key": "", # Properties of the object. Contains field @type with type URL.
2116        },
2117      ],
2118    },
2119    "resultSets": [ # ResultSets, one for each statement in the request that ran successfully, in
2120        # the same order as the statements in the request. Each ResultSet will
2121        # not contain any rows. The ResultSetStats in each ResultSet will
2122        # contain the number of rows modified by the statement.
2123        #
2124        # Only the first ResultSet in the response contains a valid
2125        # ResultSetMetadata.
2126      { # Results from Read or
2127          # ExecuteSql.
2128        "rows": [ # Each element in `rows` is a row whose format is defined by
2129            # metadata.row_type. The ith element
2130            # in each row matches the ith field in
2131            # metadata.row_type. Elements are
2132            # encoded based on type as described
2133            # here.
2134          [
2135            "",
2136          ],
2137        ],
2138        "stats": { # Additional statistics about a ResultSet or PartialResultSet. # Query plan and execution statistics for the SQL statement that
2139            # produced this result set. These can be requested by setting
2140            # ExecuteSqlRequest.query_mode.
2141            # DML statements always produce stats containing the number of rows
2142            # modified, unless executed using the
2143            # ExecuteSqlRequest.QueryMode.PLAN ExecuteSqlRequest.query_mode.
2144            # Other fields may or may not be populated, based on the
2145            # ExecuteSqlRequest.query_mode.
2146          "rowCountLowerBound": "A String", # Partitioned DML does not offer exactly-once semantics, so it
2147              # returns a lower bound of the rows modified.
2148          "rowCountExact": "A String", # Standard DML returns an exact count of rows that were modified.
2149          "queryPlan": { # Contains an ordered list of nodes appearing in the query plan. # QueryPlan for the query associated with this result.
2150            "planNodes": [ # The nodes in the query plan. Plan nodes are returned in pre-order starting
2151                # with the plan root. Each PlanNode's `id` corresponds to its index in
2152                # `plan_nodes`.
2153              { # Node information for nodes appearing in a QueryPlan.plan_nodes.
2154                "index": 42, # The `PlanNode`'s index in node list.
2155                "kind": "A String", # Used to determine the type of node. May be needed for visualizing
2156                    # different kinds of nodes differently. For example, If the node is a
2157                    # SCALAR node, it will have a condensed representation
2158                    # which can be used to directly embed a description of the node in its
2159                    # parent.
2160                "displayName": "A String", # The display name for the node.
2161                "executionStats": { # The execution statistics associated with the node, contained in a group of
2162                    # key-value pairs. Only present if the plan was returned as a result of a
2163                    # profile query. For example, number of executions, number of rows/time per
2164                    # execution etc.
2165                  "a_key": "", # Properties of the object.
2166                },
2167                "childLinks": [ # List of child node `index`es and their relationship to this parent.
2168                  { # Metadata associated with a parent-child relationship appearing in a
2169                      # PlanNode.
2170                    "variable": "A String", # Only present if the child node is SCALAR and corresponds
2171                        # to an output variable of the parent node. The field carries the name of
2172                        # the output variable.
2173                        # For example, a `TableScan` operator that reads rows from a table will
2174                        # have child links to the `SCALAR` nodes representing the output variables
2175                        # created for each column that is read by the operator. The corresponding
2176                        # `variable` fields will be set to the variable names assigned to the
2177                        # columns.
2178                    "childIndex": 42, # The node to which the link points.
2179                    "type": "A String", # The type of the link. For example, in Hash Joins this could be used to
2180                        # distinguish between the build child and the probe child, or in the case
2181                        # of the child being an output variable, to represent the tag associated
2182                        # with the output variable.
2183                  },
2184                ],
2185                "shortRepresentation": { # Condensed representation of a node and its subtree. Only present for # Condensed representation for SCALAR nodes.
2186                    # `SCALAR` PlanNode(s).
2187                  "subqueries": { # A mapping of (subquery variable name) -> (subquery node id) for cases
2188                      # where the `description` string of this node references a `SCALAR`
2189                      # subquery contained in the expression subtree rooted at this node. The
2190                      # referenced `SCALAR` subquery may not necessarily be a direct child of
2191                      # this node.
2192                    "a_key": 42,
2193                  },
2194                  "description": "A String", # A string representation of the expression subtree rooted at this node.
2195                },
2196                "metadata": { # Attributes relevant to the node contained in a group of key-value pairs.
2197                    # For example, a Parameter Reference node could have the following
2198                    # information in its metadata:
2199                    #
2200                    #     {
2201                    #       "parameter_reference": "param1",
2202                    #       "parameter_type": "array"
2203                    #     }
2204                  "a_key": "", # Properties of the object.
2205                },
2206              },
2207            ],
2208          },
2209          "queryStats": { # Aggregated statistics from the execution of the query. Only present when
2210              # the query is profiled. For example, a query could return the statistics as
2211              # follows:
2212              #
2213              #     {
2214              #       "rows_returned": "3",
2215              #       "elapsed_time": "1.22 secs",
2216              #       "cpu_time": "1.19 secs"
2217              #     }
2218            "a_key": "", # Properties of the object.
2219          },
2220        },
2221        "metadata": { # Metadata about a ResultSet or PartialResultSet. # Metadata about the result set, such as row type information.
2222          "rowType": { # `StructType` defines the fields of a STRUCT type. # Indicates the field names and types for the rows in the result
2223              # set.  For example, a SQL query like `"SELECT UserId, UserName FROM
2224              # Users"` could return a `row_type` value like:
2225              #
2226              #     "fields": [
2227              #       { "name": "UserId", "type": { "code": "INT64" } },
2228              #       { "name": "UserName", "type": { "code": "STRING" } },
2229              #     ]
2230            "fields": [ # The list of fields that make up this struct. Order is
2231                # significant, because values of this struct type are represented as
2232                # lists, where the order of field values matches the order of
2233                # fields in the StructType. In turn, the order of fields
2234                # matches the order of columns in a read request, or the order of
2235                # fields in the `SELECT` clause of a query.
2236              { # Message representing a single field of a struct.
2237                "type": { # `Type` indicates the type of a Cloud Spanner value, as might be stored in a # The type of the field.
2238                    # table cell or returned from an SQL query.
2239                  "structType": # Object with schema name: StructType # If code == STRUCT, then `struct_type`
2240                      # provides type information for the struct's fields.
2241                  "code": "A String", # Required. The TypeCode for this type.
2242                  "arrayElementType": # Object with schema name: Type # If code == ARRAY, then `array_element_type`
2243                      # is the type of the array elements.
2244                },
2245                "name": "A String", # The name of the field. For reads, this is the column name. For
2246                    # SQL queries, it is the column alias (e.g., `"Word"` in the
2247                    # query `"SELECT 'hello' AS Word"`), or the column name (e.g.,
2248                    # `"ColName"` in the query `"SELECT ColName FROM Table"`). Some
2249                    # columns might have an empty name (e.g., `"SELECT
2250                    # UPPER(ColName)"`). Note that a query result can contain
2251                    # multiple fields with the same name.
              },
            ],
          },
          "transaction": { # A transaction. # If the read or SQL query began a transaction as a side-effect, the
              # information about the new transaction is yielded here.
            "readTimestamp": "A String", # For snapshot read-only transactions, the read timestamp chosen
                # for the transaction. Not returned by default: see
                # TransactionOptions.ReadOnly.return_read_timestamp.
                #
                # A timestamp in RFC3339 UTC "Zulu" format, accurate to nanoseconds.
                # Example: `"2014-10-02T15:01:23.045123456Z"`.
            "id": "A String", # `id` may be used to identify the transaction in subsequent
                # Read,
                # ExecuteSql,
                # Commit, or
                # Rollback calls.
                #
                # Single-use read-only transactions do not have IDs, because
                # single-use transactions do not support multiple requests.
            },
          },
        },
      ],
  }</pre>
</div>

<div class="method">
    <code class="details" id="executeSql">executeSql(session, body, x__xgafv=None)</code>
  <pre>Executes an SQL statement, returning all results in a single reply. This
method cannot be used to return a result set larger than 10 MiB;
if the query yields more data than that, the query fails with
a `FAILED_PRECONDITION` error.

Operations inside read-write transactions might return `ABORTED`. If
this occurs, the application should restart the transaction from
the beginning. See Transaction for more details.

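The retry guidance above (rerun the whole transaction, bounded by wall time rather than attempt count) can be sketched as a small helper. This is an illustration only, not part of the generated API surface: the `Aborted` exception class and `fake_transaction` function below are stand-ins, not names from this library.

```python
import random
import time

class Aborted(Exception):
    """Stand-in for the ABORTED error surfaced by the client library."""

def run_in_retry_loop(work, deadline_seconds=60.0):
    """Retry `work` until it succeeds or the wall-time budget is spent.

    As recommended above, the limit is total wall time, not a fixed
    number of attempts, and each retry reruns the whole transaction
    from the beginning.
    """
    start = time.monotonic()
    delay = 0.05
    while True:
        try:
            return work()
        except Aborted:
            if time.monotonic() - start > deadline_seconds:
                raise
            # Back off with jitter before rerunning the transaction.
            time.sleep(delay * random.uniform(0.5, 1.5))
            delay = min(delay * 2, 2.0)

# Example: a hypothetical transaction that aborts twice, then commits.
attempts = {"n": 0}
def fake_transaction():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise Aborted()
    return "committed"
```

Retrying in the same session preserves the session's accumulated lock priority, which is why the loop should not create a new session per attempt.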
Larger result sets can be fetched in streaming fashion by calling
ExecuteStreamingSql instead.

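As a minimal sketch of a call to this method, the request body below selects a single-use, strong read-only transaction — per the `singleUse` documentation below, the most efficient way to run a transaction consisting of one SQL query. The project/instance/database/session names are placeholders, and the commented-out call assumes an authorized discovery-built client.

```python
# Placeholder resource name; a real session is created via sessions().create().
session_name = ("projects/my-project/instances/my-instance/"
                "databases/my-db/sessions/my-session")

# Single-use, strong read-only transaction: every read sees all
# previously committed transactions ("strong": True).
body = {
    "transaction": {"singleUse": {"readOnly": {"strong": True}}},
    "sql": "SELECT 1",
}

# With an authorized client this would be executed as:
#   service = googleapiclient.discovery.build('spanner', 'v1')
#   result = (service.projects().instances().databases().sessions()
#             .executeSql(session=session_name, body=body).execute())
```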
Args:
  session: string, Required. The session in which the SQL query should be performed. (required)
  body: object, The request body. (required)
    The object takes the form of:

{ # The request for ExecuteSql and
      # ExecuteStreamingSql.
    "transaction": { # This message is used to select the transaction in which a # The transaction to use. If none is provided, the default is a
        # temporary read-only transaction with strong concurrency.
        #
        # The transaction to use.
        #
        # For queries, if none is provided, the default is a temporary read-only
        # transaction with strong concurrency.
        #
        # Standard DML statements require a ReadWrite transaction. Single-use
        # transactions are not supported (to avoid replay).  The caller must
        # either supply an existing transaction ID or begin a new transaction.
        #
        # Partitioned DML requires an existing PartitionedDml transaction ID.
        # Read or
        # ExecuteSql call runs.
        #
        # See TransactionOptions for more information about transactions.
      "begin": { # # Transactions # Begin a new transaction and execute this read or SQL query in
          # it. The transaction ID of the new transaction is returned in
          # ResultSetMetadata.transaction, which is a Transaction.
          #
          #
          # Each session can have at most one active transaction at a time. After the
          # active transaction is completed, the session can immediately be
          # re-used for the next transaction. It is not necessary to create a
          # new session for each transaction.
          #
          # # Transaction Modes
          #
          # Cloud Spanner supports three transaction modes:
          #
          #   1. Locking read-write. This type of transaction is the only way
          #      to write data into Cloud Spanner. These transactions rely on
          #      pessimistic locking and, if necessary, two-phase commit.
          #      Locking read-write transactions may abort, requiring the
          #      application to retry.
          #
          #   2. Snapshot read-only. This transaction type provides guaranteed
          #      consistency across several reads, but does not allow
          #      writes. Snapshot read-only transactions can be configured to
          #      read at timestamps in the past. Snapshot read-only
          #      transactions do not need to be committed.
          #
          #   3. Partitioned DML. This type of transaction is used to execute
          #      a single Partitioned DML statement. Partitioned DML partitions
          #      the key space and runs the DML statement over each partition
          #      in parallel using separate, internal transactions that commit
          #      independently. Partitioned DML transactions do not need to be
          #      committed.
          #
          # For transactions that only read, snapshot read-only transactions
          # provide simpler semantics and are almost always faster. In
          # particular, read-only transactions do not take locks, so they do
          # not conflict with read-write transactions. As a consequence of not
          # taking locks, they also do not abort, so retry loops are not needed.
          #
          # Transactions may only read/write data in a single database. They
          # may, however, read/write data in different tables within that
          # database.
          #
          # ## Locking Read-Write Transactions
          #
          # Locking transactions may be used to atomically read-modify-write
          # data anywhere in a database. This type of transaction is externally
          # consistent.
          #
          # Clients should attempt to minimize the amount of time a transaction
          # is active. Faster transactions commit with higher probability
          # and cause less contention. Cloud Spanner attempts to keep read locks
          # active as long as the transaction continues to do reads, and the
          # transaction has not been terminated by
          # Commit or
          # Rollback.  Long periods of
          # inactivity at the client may cause Cloud Spanner to release a
          # transaction's locks and abort it.
          #
          # Conceptually, a read-write transaction consists of zero or more
          # reads or SQL statements followed by
          # Commit. At any time before
          # Commit, the client can send a
          # Rollback request to abort the
          # transaction.
          #
          # ### Semantics
          #
          # Cloud Spanner can commit the transaction if all read locks it acquired
          # are still valid at commit time, and it is able to acquire write
          # locks for all writes. Cloud Spanner can abort the transaction for any
          # reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees
          # that the transaction has not modified any user data in Cloud Spanner.
          #
          # Unless the transaction commits, Cloud Spanner makes no guarantees about
          # how long the transaction's locks were held for. It is an error to
          # use Cloud Spanner locks for any sort of mutual exclusion other than
          # between Cloud Spanner transactions themselves.
          #
          # ### Retrying Aborted Transactions
          #
          # When a transaction aborts, the application can choose to retry the
          # whole transaction again. To maximize the chances of successfully
          # committing the retry, the client should execute the retry in the
          # same session as the original attempt. The original session's lock
          # priority increases with each consecutive abort, meaning that each
          # attempt has a slightly better chance of success than the previous.
          #
          # Under some circumstances (e.g., many transactions attempting to
          # modify the same row(s)), a transaction can abort many times in a
          # short period before successfully committing. Thus, it is not a good
          # idea to cap the number of retries a transaction can attempt;
          # instead, it is better to limit the total amount of wall time spent
          # retrying.
          #
          # ### Idle Transactions
          #
          # A transaction is considered idle if it has no outstanding reads or
          # SQL queries and has not started a read or SQL query within the last 10
          # seconds. Idle transactions can be aborted by Cloud Spanner so that they
          # don't hold on to locks indefinitely. In that case, the commit will
          # fail with error `ABORTED`.
          #
          # If this behavior is undesirable, periodically executing a simple
          # SQL query in the transaction (e.g., `SELECT 1`) prevents the
          # transaction from becoming idle.
          #
          # ## Snapshot Read-Only Transactions
          #
          # Snapshot read-only transactions provide a simpler method than
          # locking read-write transactions for doing several consistent
          # reads. However, this type of transaction does not support writes.
          #
          # Snapshot transactions do not take locks. Instead, they work by
          # choosing a Cloud Spanner timestamp, then executing all reads at that
          # timestamp. Since they do not acquire locks, they do not block
          # concurrent read-write transactions.
          #
          # Unlike locking read-write transactions, snapshot read-only
          # transactions never abort. They can fail if the chosen read
          # timestamp is garbage collected; however, the default garbage
          # collection policy is generous enough that most applications do not
          # need to worry about this in practice.
          #
          # Snapshot read-only transactions do not need to call
          # Commit or
          # Rollback (and in fact are not
          # permitted to do so).
          #
          # To execute a snapshot transaction, the client specifies a timestamp
          # bound, which tells Cloud Spanner how to choose a read timestamp.
          #
          # The types of timestamp bound are:
          #
          #   - Strong (the default).
          #   - Bounded staleness.
          #   - Exact staleness.
          #
          # If the Cloud Spanner database to be read is geographically distributed,
          # stale read-only transactions can execute more quickly than strong
          # or read-write transactions, because they are able to execute far
          # from the leader replica.
          #
          # Each type of timestamp bound is discussed in detail below.
          #
          # ### Strong
          #
          # Strong reads are guaranteed to see the effects of all transactions
          # that have committed before the start of the read. Furthermore, all
          # rows yielded by a single read are consistent with each other -- if
          # any part of the read observes a transaction, all parts of the read
          # see the transaction.
          #
          # Strong reads are not repeatable: two consecutive strong read-only
          # transactions might return inconsistent results if there are
          # concurrent writes. If consistency across reads is required, the
          # reads should be executed within a transaction or at an exact read
          # timestamp.
          #
          # See TransactionOptions.ReadOnly.strong.
          #
          # ### Exact Staleness
          #
          # These timestamp bounds execute reads at a user-specified
          # timestamp. Reads at a timestamp are guaranteed to see a consistent
          # prefix of the global transaction history: they observe
          # modifications done by all transactions with a commit timestamp <=
          # the read timestamp, and observe none of the modifications done by
          # transactions with a larger commit timestamp. They will block until
          # all conflicting transactions that may be assigned commit timestamps
          # <= the read timestamp have finished.
          #
          # The timestamp can either be expressed as an absolute Cloud Spanner commit
          # timestamp or a staleness relative to the current time.
          #
          # These modes do not require a "negotiation phase" to pick a
          # timestamp. As a result, they execute slightly faster than the
          # equivalent boundedly stale concurrency modes. On the other hand,
          # boundedly stale reads usually return fresher results.
          #
          # See TransactionOptions.ReadOnly.read_timestamp and
          # TransactionOptions.ReadOnly.exact_staleness.
          #
          # ### Bounded Staleness
          #
          # Bounded staleness modes allow Cloud Spanner to pick the read timestamp,
          # subject to a user-provided staleness bound. Cloud Spanner chooses the
          # newest timestamp within the staleness bound that allows execution
          # of the reads at the closest available replica without blocking.
          #
          # All rows yielded are consistent with each other -- if any part of
          # the read observes a transaction, all parts of the read see the
          # transaction. Boundedly stale reads are not repeatable: two stale
          # reads, even if they use the same staleness bound, can execute at
          # different timestamps and thus return inconsistent results.
          #
          # Boundedly stale reads execute in two phases: the first phase
          # negotiates a timestamp among all replicas needed to serve the
          # read. In the second phase, reads are executed at the negotiated
          # timestamp.
          #
          # As a result of the two phase execution, bounded staleness reads are
          # usually a little slower than comparable exact staleness
          # reads. However, they are typically able to return fresher
          # results, and are more likely to execute at the closest replica.
          #
          # Because the timestamp negotiation requires up-front knowledge of
          # which rows will be read, it can only be used with single-use
          # read-only transactions.
          #
          # See TransactionOptions.ReadOnly.max_staleness and
          # TransactionOptions.ReadOnly.min_read_timestamp.
          #
          # ### Old Read Timestamps and Garbage Collection
          #
          # Cloud Spanner continuously garbage collects deleted and overwritten data
          # in the background to reclaim storage space. This process is known
          # as "version GC". By default, version GC reclaims versions after they
          # are one hour old. Because of this, Cloud Spanner cannot perform reads
          # at read timestamps more than one hour in the past. This
          # restriction also applies to in-progress reads and/or SQL queries whose
          # timestamp becomes too old while executing. Reads and SQL queries with
          # too-old read timestamps fail with the error `FAILED_PRECONDITION`.
          #
          # ## Partitioned DML Transactions
          #
          # Partitioned DML transactions are used to execute DML statements with a
          # different execution strategy that provides different, and often better,
          # scalability properties for large, table-wide operations than DML in a
          # ReadWrite transaction. Smaller scoped statements, such as an OLTP workload,
          # should prefer using ReadWrite transactions.
          #
          # Partitioned DML partitions the keyspace and runs the DML statement on each
          # partition in separate, internal transactions. These transactions commit
          # automatically when complete, and run independently from one another.
          #
          # To reduce lock contention, this execution strategy only acquires read locks
          # on rows that match the WHERE clause of the statement. Additionally, the
          # smaller per-partition transactions hold locks for less time.
          #
          # That said, Partitioned DML is not a drop-in replacement for standard DML used
          # in ReadWrite transactions.
          #
          #  - The DML statement must be fully-partitionable. Specifically, the statement
          #    must be expressible as the union of many statements which each access only
          #    a single row of the table.
          #
          #  - The statement is not applied atomically to all rows of the table. Rather,
          #    the statement is applied atomically to partitions of the table, in
          #    independent transactions. Secondary index rows are updated atomically
          #    with the base table rows.
          #
          #  - Partitioned DML does not guarantee exactly-once execution semantics
          #    against a partition. The statement will be applied at least once to each
          #    partition. It is strongly recommended that the DML statement should be
          #    idempotent to avoid unexpected results. For instance, it is potentially
          #    dangerous to run a statement such as
          #    `UPDATE table SET column = column + 1` as it could be run multiple times
          #    against some rows.
          #
          #  - The partitions are committed automatically - there is no support for
          #    Commit or Rollback. If the call returns an error, or if the client issuing
          #    the ExecuteSql call dies, it is possible that some rows had the statement
          #    executed on them successfully. It is also possible that the statement was
          #    never executed against other rows.
          #
          #  - Partitioned DML transactions may only contain the execution of a single
          #    DML statement via ExecuteSql or ExecuteStreamingSql.
          #
          #  - If any error is encountered during the execution of the partitioned DML
          #    operation (for instance, a UNIQUE INDEX violation, division by zero, or a
          #    value that cannot be stored due to schema constraints), then the
          #    operation is stopped at that point and an error is returned. It is
          #    possible that at this point, some partitions have been committed (or even
          #    committed multiple times), and other partitions have not been run at all.
          #
          # Given the above, Partitioned DML is a good fit for large, database-wide
          # operations that are idempotent, such as deleting old rows from a very large
          # table.
        "readWrite": { # Message type to initiate a read-write transaction. Currently this # Transaction may write.
            #
            # Authorization to begin a read-write transaction requires
            # `spanner.databases.beginOrRollbackReadWriteTransaction` permission
            # on the `session` resource.
            # transaction type has no options.
        },
        "readOnly": { # Message type to initiate a read-only transaction. # Transaction will not write.
            #
            # Authorization to begin a read-only transaction requires
            # `spanner.databases.beginReadOnlyTransaction` permission
            # on the `session` resource.
          "minReadTimestamp": "A String", # Executes all reads at a timestamp >= `min_read_timestamp`.
              #
              # This is useful for requesting fresher data than some previous
              # read, or data that is fresh enough to observe the effects of some
              # previously committed transaction whose timestamp is known.
              #
              # Note that this option can only be used in single-use transactions.
              #
              # A timestamp in RFC3339 UTC "Zulu" format, accurate to nanoseconds.
              # Example: `"2014-10-02T15:01:23.045123456Z"`.
          "returnReadTimestamp": True or False, # If true, the Cloud Spanner-selected read timestamp is included in
              # the Transaction message that describes the transaction.
          "maxStaleness": "A String", # Read data at a timestamp >= `NOW - max_staleness`
              # seconds. Guarantees that all writes that have committed more
              # than the specified number of seconds ago are visible. Because
              # Cloud Spanner chooses the exact timestamp, this mode works even if
              # the client's local clock is substantially skewed from Cloud Spanner
              # commit timestamps.
              #
              # Useful for reading the freshest data available at a nearby
              # replica, while bounding the possible staleness if the local
              # replica has fallen behind.
              #
              # Note that this option can only be used in single-use
              # transactions.
          "exactStaleness": "A String", # Executes all reads at a timestamp that is `exact_staleness`
              # old. The timestamp is chosen soon after the read is started.
              #
              # Guarantees that all writes that have committed more than the
              # specified number of seconds ago are visible. Because Cloud Spanner
              # chooses the exact timestamp, this mode works even if the client's
              # local clock is substantially skewed from Cloud Spanner commit
              # timestamps.
              #
              # Useful for reading at nearby replicas without the distributed
              # timestamp negotiation overhead of `max_staleness`.
          "readTimestamp": "A String", # Executes all reads at the given timestamp. Unlike other modes,
              # reads at a specific timestamp are repeatable; the same read at
              # the same timestamp always returns the same data. If the
              # timestamp is in the future, the read will block until the
              # specified timestamp, modulo the read's deadline.
              #
              # Useful for large scale consistent reads such as mapreduces, or
              # for coordinating many reads against a consistent snapshot of the
              # data.
              #
              # A timestamp in RFC3339 UTC "Zulu" format, accurate to nanoseconds.
              # Example: `"2014-10-02T15:01:23.045123456Z"`.
          "strong": True or False, # Read at a timestamp where all previously committed transactions
              # are visible.
        },
        "partitionedDml": { # Message type to initiate a Partitioned DML transaction. # Partitioned DML transaction.
            #
            # Authorization to begin a Partitioned DML transaction requires
            # `spanner.databases.beginPartitionedDmlTransaction` permission
            # on the `session` resource.
        },
      },
      "singleUse": { # # Transactions # Execute the read or SQL query in a temporary transaction.
          # This is the most efficient way to execute a transaction that
          # consists of a single SQL query.
          #
          #
          # Each session can have at most one active transaction at a time. After the
          # active transaction is completed, the session can immediately be
          # re-used for the next transaction. It is not necessary to create a
          # new session for each transaction.
          #
          # # Transaction Modes
          #
          # Cloud Spanner supports three transaction modes:
          #
          #   1. Locking read-write. This type of transaction is the only way
          #      to write data into Cloud Spanner. These transactions rely on
          #      pessimistic locking and, if necessary, two-phase commit.
          #      Locking read-write transactions may abort, requiring the
          #      application to retry.
          #
          #   2. Snapshot read-only. This transaction type provides guaranteed
          #      consistency across several reads, but does not allow
          #      writes. Snapshot read-only transactions can be configured to
          #      read at timestamps in the past. Snapshot read-only
          #      transactions do not need to be committed.
          #
          #   3. Partitioned DML. This type of transaction is used to execute
          #      a single Partitioned DML statement. Partitioned DML partitions
          #      the key space and runs the DML statement over each partition
          #      in parallel using separate, internal transactions that commit
          #      independently. Partitioned DML transactions do not need to be
          #      committed.
          #
          # For transactions that only read, snapshot read-only transactions
          # provide simpler semantics and are almost always faster. In
          # particular, read-only transactions do not take locks, so they do
          # not conflict with read-write transactions. As a consequence of not
          # taking locks, they also do not abort, so retry loops are not needed.
          #
          # Transactions may only read/write data in a single database. They
          # may, however, read/write data in different tables within that
          # database.
          #
          # ## Locking Read-Write Transactions
          #
          # Locking transactions may be used to atomically read-modify-write
          # data anywhere in a database. This type of transaction is externally
          # consistent.
          #
          # Clients should attempt to minimize the amount of time a transaction
          # is active. Faster transactions commit with higher probability
          # and cause less contention. Cloud Spanner attempts to keep read locks
          # active as long as the transaction continues to do reads, and the
          # transaction has not been terminated by
          # Commit or
          # Rollback.  Long periods of
          # inactivity at the client may cause Cloud Spanner to release a
          # transaction's locks and abort it.
          #
          # Conceptually, a read-write transaction consists of zero or more
          # reads or SQL statements followed by
          # Commit. At any time before
          # Commit, the client can send a
          # Rollback request to abort the
          # transaction.
          #
          # ### Semantics
          #
          # Cloud Spanner can commit the transaction if all read locks it acquired
          # are still valid at commit time, and it is able to acquire write
          # locks for all writes. Cloud Spanner can abort the transaction for any
          # reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees
          # that the transaction has not modified any user data in Cloud Spanner.
          #
          # Unless the transaction commits, Cloud Spanner makes no guarantees about
          # how long the transaction's locks were held for. It is an error to
          # use Cloud Spanner locks for any sort of mutual exclusion other than
          # between Cloud Spanner transactions themselves.
          #
          # ### Retrying Aborted Transactions
          #
          # When a transaction aborts, the application can choose to retry the
          # whole transaction again. To maximize the chances of successfully
          # committing the retry, the client should execute the retry in the
          # same session as the original attempt. The original session's lock
          # priority increases with each consecutive abort, meaning that each
          # attempt has a slightly better chance of success than the previous.
          #
          # Under some circumstances (e.g., many transactions attempting to
          # modify the same row(s)), a transaction can abort many times in a
          # short period before successfully committing. Thus, it is not a good
          # idea to cap the number of retries a transaction can attempt;
          # instead, it is better to limit the total amount of wall time spent
          # retrying.
          #
          # ### Idle Transactions
          #
          # A transaction is considered idle if it has no outstanding reads or
          # SQL queries and has not started a read or SQL query within the last 10
          # seconds. Idle transactions can be aborted by Cloud Spanner so that they
          # don't hold on to locks indefinitely. In that case, the commit will
          # fail with error `ABORTED`.
          #
          # If this behavior is undesirable, periodically executing a simple
          # SQL query in the transaction (e.g., `SELECT 1`) prevents the
          # transaction from becoming idle.
          #
          # ## Snapshot Read-Only Transactions
          #
          # Snapshot read-only transactions provide a simpler method than
          # locking read-write transactions for doing several consistent
          # reads. However, this type of transaction does not support writes.
          #
          # Snapshot transactions do not take locks. Instead, they work by
          # choosing a Cloud Spanner timestamp, then executing all reads at that
          # timestamp. Since they do not acquire locks, they do not block
          # concurrent read-write transactions.
          #
          # Unlike locking read-write transactions, snapshot read-only
          # transactions never abort. They can fail if the chosen read
          # timestamp is garbage collected; however, the default garbage
          # collection policy is generous enough that most applications do not
          # need to worry about this in practice.
          #
          # Snapshot read-only transactions do not need to call
          # Commit or
          # Rollback (and in fact are not
          # permitted to do so).
          #
          # To execute a snapshot transaction, the client specifies a timestamp
          # bound, which tells Cloud Spanner how to choose a read timestamp.
          #
          # The types of timestamp bound are:
          #
          #   - Strong (the default).
          #   - Bounded staleness.
          #   - Exact staleness.
          #
          # If the Cloud Spanner database to be read is geographically distributed,
          # stale read-only transactions can execute more quickly than strong
          # or read-write transactions, because they are able to execute far
2806          # from the leader replica.
2807          #
2808          # Each type of timestamp bound is discussed in detail below.
2809          #
2810          # ### Strong
2811          #
2812          # Strong reads are guaranteed to see the effects of all transactions
2813          # that have committed before the start of the read. Furthermore, all
2814          # rows yielded by a single read are consistent with each other -- if
2815          # any part of the read observes a transaction, all parts of the read
2816          # see the transaction.
2817          #
2818          # Strong reads are not repeatable: two consecutive strong read-only
2819          # transactions might return inconsistent results if there are
2820          # concurrent writes. If consistency across reads is required, the
2821          # reads should be executed within a transaction or at an exact read
2822          # timestamp.
2823          #
2824          # See TransactionOptions.ReadOnly.strong.
2825          #
2826          # ### Exact Staleness
2827          #
2828          # These timestamp bounds execute reads at a user-specified
2829          # timestamp. Reads at a timestamp are guaranteed to see a consistent
2830          # prefix of the global transaction history: they observe
2831          # modifications done by all transactions with a commit timestamp <=
2832          # the read timestamp, and observe none of the modifications done by
2833          # transactions with a larger commit timestamp. They will block until
2834          # all conflicting transactions that may be assigned commit timestamps
2835          # <= the read timestamp have finished.
2836          #
2837          # The timestamp can either be expressed as an absolute Cloud Spanner commit
2838          # timestamp or a staleness relative to the current time.
2839          #
2840          # These modes do not require a "negotiation phase" to pick a
2841          # timestamp. As a result, they execute slightly faster than the
2842          # equivalent boundedly stale concurrency modes. On the other hand,
2843          # boundedly stale reads usually return fresher results.
2844          #
2845          # See TransactionOptions.ReadOnly.read_timestamp and
2846          # TransactionOptions.ReadOnly.exact_staleness.
2847          #
2848          # ### Bounded Staleness
2849          #
2850          # Bounded staleness modes allow Cloud Spanner to pick the read timestamp,
2851          # subject to a user-provided staleness bound. Cloud Spanner chooses the
2852          # newest timestamp within the staleness bound that allows execution
2853          # of the reads at the closest available replica without blocking.
2854          #
2855          # All rows yielded are consistent with each other -- if any part of
2856          # the read observes a transaction, all parts of the read see the
2857          # transaction. Boundedly stale reads are not repeatable: two stale
2858          # reads, even if they use the same staleness bound, can execute at
2859          # different timestamps and thus return inconsistent results.
2860          #
2861          # Boundedly stale reads execute in two phases: the first phase
2862          # negotiates a timestamp among all replicas needed to serve the
2863          # read. In the second phase, reads are executed at the negotiated
2864          # timestamp.
2865          #
2866          # As a result of the two-phase execution, bounded staleness reads are
2867          # usually a little slower than comparable exact staleness
2868          # reads. However, they are typically able to return fresher
2869          # results, and are more likely to execute at the closest replica.
2870          #
2871          # Because the timestamp negotiation requires up-front knowledge of
2872          # which rows will be read, it can only be used with single-use
2873          # read-only transactions.
2874          #
2875          # See TransactionOptions.ReadOnly.max_staleness and
2876          # TransactionOptions.ReadOnly.min_read_timestamp.
2877          #
2878          # ### Old Read Timestamps and Garbage Collection
2879          #
2880          # Cloud Spanner continuously garbage collects deleted and overwritten data
2881          # in the background to reclaim storage space. This process is known
2882          # as "version GC". By default, version GC reclaims versions after they
2883          # are one hour old. Because of this, Cloud Spanner cannot perform reads
2884          # at read timestamps more than one hour in the past. This
2885          # restriction also applies to in-progress reads and/or SQL queries whose
2886          # timestamps become too old while executing. Reads and SQL queries with
2887          # too-old read timestamps fail with the error `FAILED_PRECONDITION`.
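The three timestamp bounds discussed above map onto the `readOnly` fields of `TransactionOptions`. A minimal sketch, assuming the JSON field names from this document's schema; the staleness values themselves are made-up examples:

```python
# Sketch of the three timestamp-bound request bodies. Durations are
# sent as strings like "10s"; the values here are illustrative only.

# Strong (the default): sees all transactions committed before the read.
strong_bound = {"readOnly": {"strong": True, "returnReadTimestamp": True}}

# Exact staleness: read at exactly NOW - 15 seconds (an explicit
# readTimestamp would make the read repeatable instead).
exact_bound = {"readOnly": {"exactStaleness": "15s"}}

# Bounded staleness: Cloud Spanner picks the freshest timestamp within
# the bound; valid only for single-use read-only transactions.
bounded_bound = {"readOnly": {"maxStaleness": "10s"}}
```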
2888          #
2889          # ## Partitioned DML Transactions
2890          #
2891          # Partitioned DML transactions are used to execute DML statements with a
2892          # different execution strategy that provides different, and often better,
2893          # scalability properties for large, table-wide operations than DML in a
2894          # ReadWrite transaction. Smaller-scoped statements, such as those in
2895          # an OLTP workload, should use ReadWrite transactions.
2896          #
2897          # Partitioned DML partitions the keyspace and runs the DML statement on each
2898          # partition in separate, internal transactions. These transactions commit
2899          # automatically when complete, and run independently from one another.
2900          #
2901          # To reduce lock contention, this execution strategy only acquires read locks
2902          # on rows that match the WHERE clause of the statement. Additionally, the
2903          # smaller per-partition transactions hold locks for less time.
2904          #
2905          # That said, Partitioned DML is not a drop-in replacement for standard DML used
2906          # in ReadWrite transactions.
2907          #
2908          #  - The DML statement must be fully-partitionable. Specifically, the statement
2909          #    must be expressible as the union of many statements which each access only
2910          #    a single row of the table.
2911          #
2912          #  - The statement is not applied atomically to all rows of the table. Rather,
2913          #    the statement is applied atomically to partitions of the table, in
2914          #    independent transactions. Secondary index rows are updated atomically
2915          #    with the base table rows.
2916          #
2917          #  - Partitioned DML does not guarantee exactly-once execution semantics
2918          #    against a partition. The statement will be applied at least once to each
2919          #    partition. It is strongly recommended that the DML statement be
2920          #    idempotent to avoid unexpected results. For instance, it is potentially
2921          #    dangerous to run a statement such as
2922          #    `UPDATE table SET column = column + 1` as it could be run multiple times
2923          #    against some rows.
2924          #
2925          #  - The partitions are committed automatically - there is no support for
2926          #    Commit or Rollback. If the call returns an error, or if the client issuing
2927          #    the ExecuteSql call dies, it is possible that some rows had the statement
2928          #    executed on them successfully. It is also possible that the statement was
2929          #    never executed against other rows.
2930          #
2931          #  - Partitioned DML transactions may only contain the execution of a single
2932          #    DML statement via ExecuteSql or ExecuteStreamingSql.
2933          #
2934          #  - If any error is encountered during the execution of the partitioned DML
2935          #    operation (for instance, a UNIQUE INDEX violation, division by zero, or a
2936          #    value that cannot be stored due to schema constraints), then the
2937          #    operation is stopped at that point and an error is returned. It is
2938          #    possible that at this point, some partitions have been committed (or even
2939          #    committed multiple times), and other partitions have not been run at all.
2940          #
2941          # Given the above, Partitioned DML is a good fit for large, database-wide
2942          # operations that are idempotent, such as deleting old rows from a very large
2943          # table.
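To illustrate the idempotence recommendation above, here is a sketch of an ExecuteSql request body for a Partitioned DML statement; the transaction ID, table, and column names are hypothetical:

```python
# Hypothetical request body; the transaction id would come from a prior
# BeginTransaction call with {"options": {"partitionedDml": {}}}.
body = {
    "transaction": {"id": "example-partitioned-dml-txn"},
    # Idempotent by design: re-running this DELETE on a partition that
    # already executed it removes no further rows.
    "sql": "DELETE FROM Events WHERE CreateTime < @cutoff",
    "params": {"cutoff": "2014-10-02T15:01:23.045123456Z"},
    "paramTypes": {"cutoff": {"code": "TIMESTAMP"}},
    "seqno": "1",  # required for DML statements
}
```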
2944        "readWrite": { # Message type to initiate a read-write transaction. # Transaction may write.
2945            #
2946            # Currently, this transaction type has no options.
2947            #
2948            # Authorization to begin a read-write transaction requires
2949            # `spanner.databases.beginOrRollbackReadWriteTransaction` permission on the `session` resource.
2950        },
2951        "readOnly": { # Message type to initiate a read-only transaction. # Transaction will not write.
2952            #
2953            # Authorization to begin a read-only transaction requires
2954            # `spanner.databases.beginReadOnlyTransaction` permission
2955            # on the `session` resource.
2956          "minReadTimestamp": "A String", # Executes all reads at a timestamp >= `min_read_timestamp`.
2957              #
2958              # This is useful for requesting fresher data than some previous
2959              # read, or data that is fresh enough to observe the effects of some
2960              # previously committed transaction whose timestamp is known.
2961              #
2962              # Note that this option can only be used in single-use transactions.
2963              #
2964              # A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
2965              # Example: `"2014-10-02T15:01:23.045123456Z"`.
2966          "returnReadTimestamp": True or False, # If true, the Cloud Spanner-selected read timestamp is included in
2967              # the Transaction message that describes the transaction.
2968          "maxStaleness": "A String", # Read data at a timestamp >= `NOW - max_staleness`
2969              # seconds. Guarantees that all writes that have committed more
2970              # than the specified number of seconds ago are visible. Because
2971              # Cloud Spanner chooses the exact timestamp, this mode works even if
2972              # the client's local clock is substantially skewed from Cloud Spanner
2973              # commit timestamps.
2974              #
2975              # Useful for reading the freshest data available at a nearby
2976              # replica, while bounding the possible staleness if the local
2977              # replica has fallen behind.
2978              #
2979              # Note that this option can only be used in single-use
2980              # transactions.
2981          "exactStaleness": "A String", # Executes all reads at a timestamp that is `exact_staleness`
2982              # old. The timestamp is chosen soon after the read is started.
2983              #
2984              # Guarantees that all writes that have committed more than the
2985              # specified number of seconds ago are visible. Because Cloud Spanner
2986              # chooses the exact timestamp, this mode works even if the client's
2987              # local clock is substantially skewed from Cloud Spanner commit
2988              # timestamps.
2989              #
2990              # Useful for reading at nearby replicas without the distributed
2991              # timestamp negotiation overhead of `max_staleness`.
2992          "readTimestamp": "A String", # Executes all reads at the given timestamp. Unlike other modes,
2993              # reads at a specific timestamp are repeatable; the same read at
2994              # the same timestamp always returns the same data. If the
2995              # timestamp is in the future, the read will block until the
2996              # specified timestamp, modulo the read's deadline.
2997              #
2998              # Useful for large scale consistent reads such as mapreduces, or
2999              # for coordinating many reads against a consistent snapshot of the
3000              # data.
3001              #
3002              # A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
3003              # Example: `"2014-10-02T15:01:23.045123456Z"`.
3004          "strong": True or False, # Read at a timestamp where all previously committed transactions
3005              # are visible.
3006        },
3007        "partitionedDml": { # Message type to initiate a Partitioned DML transaction. # Partitioned DML transaction.
3008            #
3009            # Authorization to begin a Partitioned DML transaction requires
3010            # `spanner.databases.beginPartitionedDmlTransaction` permission
3011            # on the `session` resource.
3012        },
3013      },
3014      "id": "A String", # Execute the read or SQL query in a previously-started transaction.
3015    },
3016    "seqno": "A String", # A per-transaction sequence number used to identify this request. This
3017        # makes each request idempotent such that if the request is received multiple
3018        # times, at most one will succeed.
3019        #
3020        # The sequence number must be monotonically increasing within the
3021        # transaction. If a request arrives for the first time with an out-of-order
3022        # sequence number, the transaction may be aborted. Replays of previously
3023        # handled requests will yield the same response as the first execution.
3024        #
3025        # Required for DML statements. Ignored for queries.
3026    "resumeToken": "A String", # If this request is resuming a previously interrupted SQL statement
3027        # execution, `resume_token` should be copied from the last
3028        # PartialResultSet yielded before the interruption. Doing this
3029        # enables the new SQL statement execution to resume where the last one left
3030        # off. The rest of the request parameters must exactly match the
3031        # request that yielded this token.
3032    "partitionToken": "A String", # If present, results will be restricted to the specified partition
3033        # previously created using PartitionQuery().  There must be an exact
3034        # match for the values of fields common to this message and the
3035        # PartitionQueryRequest message used to create this partition_token.
3036    "paramTypes": { # It is not always possible for Cloud Spanner to infer the right SQL type
3037        # from a JSON value.  For example, values of type `BYTES` and values
3038        # of type `STRING` both appear in params as JSON strings.
3039        #
3040        # In these cases, `param_types` can be used to specify the exact
3041        # SQL type for some or all of the SQL statement parameters. See the
3042        # definition of Type for more information
3043        # about SQL types.
3044      "a_key": { # `Type` indicates the type of a Cloud Spanner value, as might be stored in a
3045          # table cell or returned from an SQL query.
3046        "structType": # Object with schema name: StructType # If code == STRUCT, then `struct_type`
3047            # provides type information for the struct's fields.
3048        "code": "A String", # Required. The TypeCode for this type.
3049        "arrayElementType": # Object with schema name: Type # If code == ARRAY, then `array_element_type`
3050            # is the type of the array elements.
3051      },
3052    },
3053    "queryMode": "A String", # Used to control the amount of debugging information returned in
3054        # ResultSetStats. If partition_token is set, query_mode can only
3055        # be set to QueryMode.NORMAL.
3056    "sql": "A String", # Required. The SQL string.
3057    "params": { # The SQL string can contain parameter placeholders. A parameter
3058        # placeholder consists of `'@'` followed by the parameter
3059        # name. Parameter names consist of any combination of letters,
3060        # numbers, and underscores.
3061        #
3062        # Parameters can appear anywhere that a literal value is expected.  The same
3063        # parameter name can be used more than once, for example:
3064        #   `"WHERE id > @msg_id AND id < @msg_id + 100"`
3065        #
3066        # It is an error to execute an SQL statement with unbound parameters.
3067        #
3068        # Parameter values are specified using `params`, which is a JSON
3069        # object whose keys are parameter names, and whose values are the
3070        # corresponding parameter values.
3071      "a_key": "", # Properties of the object.
3072    },
3073  }
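The placeholder rules above can be illustrated with a minimal request body (the table name and values are made up). Note that INT64 values travel as JSON strings, so `paramTypes` is needed to disambiguate them:

```python
# Minimal sketch of a parameterized ExecuteSql request body. The same
# placeholder may appear more than once, and every placeholder must be
# bound in `params`.
body = {
    "sql": "SELECT Id, Msg FROM Messages "
           "WHERE Id > @msg_id AND Id < @msg_id + 100",
    "params": {"msg_id": "1000"},  # INT64 is encoded as a JSON string
    "paramTypes": {"msg_id": {"code": "INT64"}},
}
```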
3074
3075  x__xgafv: string, V1 error format.
3076    Allowed values
3077      1 - v1 error format
3078      2 - v2 error format
3079
3080Returns:
3081  An object of the form:
3082
3083    { # Results from Read or
3084      # ExecuteSql.
3085    "rows": [ # Each element in `rows` is a row whose format is defined by
3086        # metadata.row_type. The ith element
3087        # in each row matches the ith field in
3088        # metadata.row_type. Elements are
3089        # encoded based on type as described
3090        # here.
3091      [
3092        "",
3093      ],
3094    ],
3095    "stats": { # Additional statistics about a ResultSet or PartialResultSet. # Query plan and execution statistics for the SQL statement that
3096        # produced this result set. These can be requested by setting
3097        # ExecuteSqlRequest.query_mode.
3098        # DML statements always produce stats containing the number of rows
3099        # modified, unless executed with
3100        # ExecuteSqlRequest.query_mode set to ExecuteSqlRequest.QueryMode.PLAN.
3101        # Other fields may or may not be populated, based on the
3102        # ExecuteSqlRequest.query_mode.
3103      "rowCountLowerBound": "A String", # Partitioned DML does not offer exactly-once semantics, so it
3104          # returns a lower bound of the rows modified.
3105      "rowCountExact": "A String", # Standard DML returns an exact count of rows that were modified.
3106      "queryPlan": { # Contains an ordered list of nodes appearing in the query plan. # QueryPlan for the query associated with this result.
3107        "planNodes": [ # The nodes in the query plan. Plan nodes are returned in pre-order starting
3108            # with the plan root. Each PlanNode's `id` corresponds to its index in
3109            # `plan_nodes`.
3110          { # Node information for nodes appearing in a QueryPlan.plan_nodes.
3111            "index": 42, # The `PlanNode`'s index in node list.
3112            "kind": "A String", # Used to determine the type of node. May be needed for visualizing
3113                # different kinds of nodes differently. For example, if the node is a
3114                # SCALAR node, it will have a condensed representation
3115                # which can be used to directly embed a description of the node in its
3116                # parent.
3117            "displayName": "A String", # The display name for the node.
3118            "executionStats": { # The execution statistics associated with the node, contained in a group of
3119                # key-value pairs. Only present if the plan was returned as a result of a
3120                # profile query. For example, number of executions, number of rows/time per
3121                # execution etc.
3122              "a_key": "", # Properties of the object.
3123            },
3124            "childLinks": [ # List of child node `index`es and their relationship to this parent.
3125              { # Metadata associated with a parent-child relationship appearing in a
3126                  # PlanNode.
3127                "variable": "A String", # Only present if the child node is SCALAR and corresponds
3128                    # to an output variable of the parent node. The field carries the name of
3129                    # the output variable.
3130                    # For example, a `TableScan` operator that reads rows from a table will
3131                    # have child links to the `SCALAR` nodes representing the output variables
3132                    # created for each column that is read by the operator. The corresponding
3133                    # `variable` fields will be set to the variable names assigned to the
3134                    # columns.
3135                "childIndex": 42, # The node to which the link points.
3136                "type": "A String", # The type of the link. For example, in Hash Joins this could be used to
3137                    # distinguish between the build child and the probe child, or in the case
3138                    # of the child being an output variable, to represent the tag associated
3139                    # with the output variable.
3140              },
3141            ],
3142            "shortRepresentation": { # Condensed representation of a node and its subtree. Only present for # Condensed representation for SCALAR nodes.
3143                # `SCALAR` PlanNode(s).
3144              "subqueries": { # A mapping of (subquery variable name) -> (subquery node id) for cases
3145                  # where the `description` string of this node references a `SCALAR`
3146                  # subquery contained in the expression subtree rooted at this node. The
3147                  # referenced `SCALAR` subquery may not necessarily be a direct child of
3148                  # this node.
3149                "a_key": 42,
3150              },
3151              "description": "A String", # A string representation of the expression subtree rooted at this node.
3152            },
3153            "metadata": { # Attributes relevant to the node contained in a group of key-value pairs.
3154                # For example, a Parameter Reference node could have the following
3155                # information in its metadata:
3156                #
3157                #     {
3158                #       "parameter_reference": "param1",
3159                #       "parameter_type": "array"
3160                #     }
3161              "a_key": "", # Properties of the object.
3162            },
3163          },
3164        ],
3165      },
3166      "queryStats": { # Aggregated statistics from the execution of the query. Only present when
3167          # the query is profiled. For example, a query could return the statistics as
3168          # follows:
3169          #
3170          #     {
3171          #       "rows_returned": "3",
3172          #       "elapsed_time": "1.22 secs",
3173          #       "cpu_time": "1.19 secs"
3174          #     }
3175        "a_key": "", # Properties of the object.
3176      },
3177    },
3178    "metadata": { # Metadata about a ResultSet or PartialResultSet. # Metadata about the result set, such as row type information.
3179      "rowType": { # `StructType` defines the fields of a STRUCT type. # Indicates the field names and types for the rows in the result
3180          # set.  For example, a SQL query like `"SELECT UserId, UserName FROM
3181          # Users"` could return a `row_type` value like:
3182          #
3183          #     "fields": [
3184          #       { "name": "UserId", "type": { "code": "INT64" } },
3185          #       { "name": "UserName", "type": { "code": "STRING" } },
3186          #     ]
3187        "fields": [ # The list of fields that make up this struct. Order is
3188            # significant, because values of this struct type are represented as
3189            # lists, where the order of field values matches the order of
3190            # fields in the StructType. In turn, the order of fields
3191            # matches the order of columns in a read request, or the order of
3192            # fields in the `SELECT` clause of a query.
3193          { # Message representing a single field of a struct.
3194            "type": { # `Type` indicates the type of a Cloud Spanner value, as might be stored in a # The type of the field.
3195                # table cell or returned from an SQL query.
3196              "structType": # Object with schema name: StructType # If code == STRUCT, then `struct_type`
3197                  # provides type information for the struct's fields.
3198              "code": "A String", # Required. The TypeCode for this type.
3199              "arrayElementType": # Object with schema name: Type # If code == ARRAY, then `array_element_type`
3200                  # is the type of the array elements.
3201            },
3202            "name": "A String", # The name of the field. For reads, this is the column name. For
3203                # SQL queries, it is the column alias (e.g., `"Word"` in the
3204                # query `"SELECT 'hello' AS Word"`), or the column name (e.g.,
3205                # `"ColName"` in the query `"SELECT ColName FROM Table"`). Some
3206                # columns might have an empty name (e.g., `"SELECT
3207                # UPPER(ColName)"`). Note that a query result can contain
3208                # multiple fields with the same name.
3209          },
3210        ],
3211      },
3212      "transaction": { # A transaction. # If the read or SQL query began a transaction as a side-effect, the
3213          # information about the new transaction is yielded here.
3214        "readTimestamp": "A String", # For snapshot read-only transactions, the read timestamp chosen
3215            # for the transaction. Not returned by default: see
3216            # TransactionOptions.ReadOnly.return_read_timestamp.
3217            #
3218            # A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
3219            # Example: `"2014-10-02T15:01:23.045123456Z"`.
3220        "id": "A String", # `id` may be used to identify the transaction in subsequent
3221            # Read,
3222            # ExecuteSql,
3223            # Commit, or
3224            # Rollback calls.
3225            #
3226            # Single-use read-only transactions do not have IDs, because
3227            # single-use transactions do not support multiple requests.
3228      },
3229    },
3230  }</pre>
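Because `rows` are returned as positional lists, clients typically zip each row with the field names in `metadata.rowType`. A minimal sketch; the response value here is a fabricated example shaped like the schema above:

```python
# Fabricated ResultSet, shaped like the documented schema.
result_set = {
    "metadata": {"rowType": {"fields": [
        {"name": "UserId", "type": {"code": "INT64"}},
        {"name": "UserName", "type": {"code": "STRING"}},
    ]}},
    "rows": [["1", "alice"], ["2", "bob"]],
}

# The ith element of each row matches the ith field in rowType.
names = [f["name"] for f in result_set["metadata"]["rowType"]["fields"]]
records = [dict(zip(names, row)) for row in result_set["rows"]]
# Note: INT64 values arrive as decimal strings; converting them is left
# to the caller.
```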
3231</div>
3232
3233<div class="method">
3234    <code class="details" id="executeStreamingSql">executeStreamingSql(session, body, x__xgafv=None)</code>
3235  <pre>Like ExecuteSql, except returns the result
3236set as a stream. Unlike ExecuteSql, there
3237is no limit on the size of the returned result set. However, no
3238individual row in the result set can exceed 100 MiB, and no
3239column value can exceed 10 MiB.
3240
3241Args:
3242  session: string, Required. The session in which the SQL query should be performed. (required)
3243  body: object, The request body. (required)
3244    The object takes the form of:
3245
3246{ # The request for ExecuteSql and
3247      # ExecuteStreamingSql.
3248    "transaction": { # This message is used to select the transaction in which a # The transaction to use.
3249        # Read or
3250        # ExecuteSql call runs.
3251        #
3252        # For queries, if none is provided, the default is a temporary
3253        # read-only transaction with strong concurrency.
3254        #
3255        # Standard DML statements require a ReadWrite transaction.
3256        # Single-use transactions are not supported (to avoid replay).
3257        # The caller must either supply an existing transaction ID or
3258        # begin a new transaction.
3259        #
3260        # Partitioned DML requires an existing PartitionedDml
3261        # transaction ID.
3262        #
3263        # See TransactionOptions for more information about
3264        # transactions.
3265      "begin": { # # Transactions # Begin a new transaction and execute this read or SQL query in
3266          # it. The transaction ID of the new transaction is returned in
3267          # ResultSetMetadata.transaction, which is a Transaction.
3268          #
3270          # Each session can have at most one active transaction at a time. After the
3271          # active transaction is completed, the session can immediately be
3272          # re-used for the next transaction. It is not necessary to create a
3273          # new session for each transaction.
3274          #
3275          # # Transaction Modes
3276          #
3277          # Cloud Spanner supports three transaction modes:
3278          #
3279          #   1. Locking read-write. This type of transaction is the only way
3280          #      to write data into Cloud Spanner. These transactions rely on
3281          #      pessimistic locking and, if necessary, two-phase commit.
3282          #      Locking read-write transactions may abort, requiring the
3283          #      application to retry.
3284          #
3285          #   2. Snapshot read-only. This transaction type provides guaranteed
3286          #      consistency across several reads, but does not allow
3287          #      writes. Snapshot read-only transactions can be configured to
3288          #      read at timestamps in the past. Snapshot read-only
3289          #      transactions do not need to be committed.
3290          #
3291          #   3. Partitioned DML. This type of transaction is used to execute
3292          #      a single Partitioned DML statement. Partitioned DML partitions
3293          #      the key space and runs the DML statement over each partition
3294          #      in parallel using separate, internal transactions that commit
3295          #      independently. Partitioned DML transactions do not need to be
3296          #      committed.
3297          #
3298          # For transactions that only read, snapshot read-only transactions
3299          # provide simpler semantics and are almost always faster. In
3300          # particular, read-only transactions do not take locks, so they do
3301          # not conflict with read-write transactions. As a consequence of not
3302          # taking locks, they also do not abort, so retry loops are not needed.
3303          #
3304          # Transactions may only read/write data in a single database. They
3305          # may, however, read/write data in different tables within that
3306          # database.
3307          #
3308          # ## Locking Read-Write Transactions
3309          #
3310          # Locking transactions may be used to atomically read-modify-write
3311          # data anywhere in a database. This type of transaction is externally
3312          # consistent.
3313          #
3314          # Clients should attempt to minimize the amount of time a transaction
3315          # is active. Faster transactions commit with higher probability
3316          # and cause less contention. Cloud Spanner attempts to keep read locks
3317          # active as long as the transaction continues to do reads, and the
3318          # transaction has not been terminated by
3319          # Commit or
3320          # Rollback.  Long periods of
3321          # inactivity at the client may cause Cloud Spanner to release a
3322          # transaction's locks and abort it.
3323          #
3324          # Conceptually, a read-write transaction consists of zero or more
3325          # reads or SQL statements followed by
3326          # Commit. At any time before
3327          # Commit, the client can send a
3328          # Rollback request to abort the
3329          # transaction.
3330          #
3331          # ### Semantics
3332          #
3333          # Cloud Spanner can commit the transaction if all read locks it acquired
3334          # are still valid at commit time, and it is able to acquire write
3335          # locks for all writes. Cloud Spanner can abort the transaction for any
3336          # reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees
3337          # that the transaction has not modified any user data in Cloud Spanner.
3338          #
3339          # Unless the transaction commits, Cloud Spanner makes no guarantees about
3340          # how long the transaction's locks were held for. It is an error to
3341          # use Cloud Spanner locks for any sort of mutual exclusion other than
3342          # between Cloud Spanner transactions themselves.
3343          #
3344          # ### Retrying Aborted Transactions
3345          #
3346          # When a transaction aborts, the application can choose to retry the
3347          # whole transaction again. To maximize the chances of successfully
3348          # committing the retry, the client should execute the retry in the
3349          # same session as the original attempt. The original session's lock
3350          # priority increases with each consecutive abort, meaning that each
3351          # attempt has a slightly better chance of success than the previous.
3352          #
3353          # Under some circumstances (e.g., many transactions attempting to
3354          # modify the same row(s)), a transaction can abort many times in a
3355          # short period before successfully committing. Thus, it is not a good
3356          # idea to cap the number of retries a transaction can attempt;
3357          # instead, it is better to limit the total amount of wall time spent
3358          # retrying.
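The wall-time-bounded retry loop described above can be sketched as follows; the `Aborted` exception and `flaky_txn` function are hypothetical stand-ins for the real client's ABORTED error and transaction body:

```python
import time

class Aborted(Exception):
    """Stand-in for the ABORTED error a failed commit returns."""

def retry_with_deadline(work, max_wall_seconds=60.0, backoff=0.01):
    # Retry the whole transaction until it commits or the wall-time
    # budget is exhausted; no fixed cap on the number of attempts.
    deadline = time.monotonic() + max_wall_seconds
    attempt = 0
    while True:
        attempt += 1
        try:
            return work(attempt)
        except Aborted:
            if time.monotonic() >= deadline:
                raise
            time.sleep(min(backoff * attempt, 1.0))

# A transaction body that aborts twice before committing, for illustration.
calls = []
def flaky_txn(attempt):
    calls.append(attempt)
    if attempt < 3:
        raise Aborted()
    return "committed"
```

Running the retry in the same session each time (as recommended above) is what raises the lock priority; the loop itself only budgets time.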
3359          #
3360          # ### Idle Transactions
3361          #
3362          # A transaction is considered idle if it has no outstanding reads or
3363          # SQL queries and has not started a read or SQL query within the last 10
3364          # seconds. Idle transactions can be aborted by Cloud Spanner so that they
3365          # don't hold on to locks indefinitely. In that case, the commit will
3366          # fail with error `ABORTED`.
3367          #
3368          # If this behavior is undesirable, periodically executing a simple
3369          # SQL query in the transaction (e.g., `SELECT 1`) prevents the
3370          # transaction from becoming idle.
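A minimal sketch of the keep-alive pattern, assuming `execute_sql` is a callable that issues the query through whatever client is in use:

```python
import threading

def keep_alive(execute_sql, interval_seconds=5.0):
    # Run a trivial query on a timer so the transaction never goes a
    # full 10 seconds without activity; call stop.set() when done.
    stop = threading.Event()
    def loop():
        while not stop.wait(interval_seconds):
            execute_sql("SELECT 1")
    threading.Thread(target=loop, daemon=True).start()
    return stop
```

An interval comfortably under the 10-second idle window, such as the 5 seconds used here, leaves slack for scheduling delays.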
3371          #
3372          # ## Snapshot Read-Only Transactions
3373          #
3374          # Snapshot read-only transactions provide a simpler method than
3375          # locking read-write transactions for doing several consistent
3376          # reads. However, this type of transaction does not support writes.
3377          #
3378          # Snapshot transactions do not take locks. Instead, they work by
3379          # choosing a Cloud Spanner timestamp, then executing all reads at that
3380          # timestamp. Since they do not acquire locks, they do not block
3381          # concurrent read-write transactions.
3382          #
3383          # Unlike locking read-write transactions, snapshot read-only
3384          # transactions never abort. They can fail if the chosen read
3385          # timestamp is garbage collected; however, the default garbage
3386          # collection policy is generous enough that most applications do not
3387          # need to worry about this in practice.
3388          #
3389          # Snapshot read-only transactions do not need to call
3390          # Commit or
3391          # Rollback (and in fact are not
3392          # permitted to do so).
3393          #
3394          # To execute a snapshot transaction, the client specifies a timestamp
3395          # bound, which tells Cloud Spanner how to choose a read timestamp.
3396          #
3397          # The types of timestamp bound are:
3398          #
3399          #   - Strong (the default).
3400          #   - Bounded staleness.
3401          #   - Exact staleness.
3402          #
3403          # If the Cloud Spanner database to be read is geographically distributed,
3404          # stale read-only transactions can execute more quickly than strong
3405          # or read-write transactions, because they are able to execute far
3406          # from the leader replica.
3407          #
3408          # Each type of timestamp bound is discussed in detail below.
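In request terms, the three bounds correspond to the `readOnly` fields documented further down; a sketch of each option fragment, assuming durations use the JSON `Duration` form (e.g. `"10s"`):

```python
# Strong (the default): see everything committed before the read starts.
strong = {"readOnly": {"strong": True}}

# Exact staleness: read at a timestamp exactly 10 seconds in the past.
exact = {"readOnly": {"exactStaleness": "10s"}}

# Bounded staleness: let Cloud Spanner pick the newest timestamp at
# most 15 seconds old (single-use transactions only).
bounded = {"readOnly": {"maxStaleness": "15s"}}
```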
3409          #
3410          # ### Strong
3411          #
3412          # Strong reads are guaranteed to see the effects of all transactions
3413          # that have committed before the start of the read. Furthermore, all
3414          # rows yielded by a single read are consistent with each other -- if
3415          # any part of the read observes a transaction, all parts of the read
3416          # see the transaction.
3417          #
3418          # Strong reads are not repeatable: two consecutive strong read-only
3419          # transactions might return inconsistent results if there are
3420          # concurrent writes. If consistency across reads is required, the
3421          # reads should be executed within a transaction or at an exact read
3422          # timestamp.
3423          #
3424          # See TransactionOptions.ReadOnly.strong.
3425          #
3426          # ### Exact Staleness
3427          #
3428          # These timestamp bounds execute reads at a user-specified
3429          # timestamp. Reads at a timestamp are guaranteed to see a consistent
3430          # prefix of the global transaction history: they observe
3431          # modifications done by all transactions with a commit timestamp <=
3432          # the read timestamp, and observe none of the modifications done by
3433          # transactions with a larger commit timestamp. They will block until
3434          # all conflicting transactions that may be assigned commit timestamps
3435          # <= the read timestamp have finished.
3436          #
3437          # The timestamp can either be expressed as an absolute Cloud Spanner commit
3438          # timestamp or a staleness relative to the current time.
3439          #
3440          # These modes do not require a "negotiation phase" to pick a
3441          # timestamp. As a result, they execute slightly faster than the
3442          # equivalent boundedly stale concurrency modes. On the other hand,
3443          # boundedly stale reads usually return fresher results.
3444          #
3445          # See TransactionOptions.ReadOnly.read_timestamp and
3446          # TransactionOptions.ReadOnly.exact_staleness.
3447          #
3448          # ### Bounded Staleness
3449          #
3450          # Bounded staleness modes allow Cloud Spanner to pick the read timestamp,
3451          # subject to a user-provided staleness bound. Cloud Spanner chooses the
3452          # newest timestamp within the staleness bound that allows execution
3453          # of the reads at the closest available replica without blocking.
3454          #
3455          # All rows yielded are consistent with each other -- if any part of
3456          # the read observes a transaction, all parts of the read see the
3457          # transaction. Boundedly stale reads are not repeatable: two stale
3458          # reads, even if they use the same staleness bound, can execute at
3459          # different timestamps and thus return inconsistent results.
3460          #
3461          # Boundedly stale reads execute in two phases: the first phase
3462          # negotiates a timestamp among all replicas needed to serve the
3463          # read. In the second phase, reads are executed at the negotiated
3464          # timestamp.
3465          #
3466          # As a result of the two-phase execution, bounded staleness reads are
3467          # usually a little slower than comparable exact staleness
3468          # reads. However, they are typically able to return fresher
3469          # results, and are more likely to execute at the closest replica.
3470          #
3471          # Because the timestamp negotiation requires up-front knowledge of
3472          # which rows will be read, it can only be used with single-use
3473          # read-only transactions.
3474          #
3475          # See TransactionOptions.ReadOnly.max_staleness and
3476          # TransactionOptions.ReadOnly.min_read_timestamp.
3477          #
3478          # ### Old Read Timestamps and Garbage Collection
3479          #
3480          # Cloud Spanner continuously garbage collects deleted and overwritten data
3481          # in the background to reclaim storage space. This process is known
3482          # as "version GC". By default, version GC reclaims versions after they
3483          # are one hour old. Because of this, Cloud Spanner cannot perform reads
3484          # at read timestamps more than one hour in the past. This
3485          # restriction also applies to in-progress reads and/or SQL queries whose
3486          # timestamps become too old while executing. Reads and SQL queries with
3487          # too-old read timestamps fail with the error `FAILED_PRECONDITION`.
3488          #
3489          # ## Partitioned DML Transactions
3490          #
3491          # Partitioned DML transactions are used to execute DML statements with a
3492          # different execution strategy that provides different, and often better,
3493          # scalability properties for large, table-wide operations than DML in a
3494          # ReadWrite transaction. Smaller-scoped statements, such as an OLTP
3495          # workload, should use ReadWrite transactions.
3496          #
3497          # Partitioned DML partitions the keyspace and runs the DML statement on each
3498          # partition in separate, internal transactions. These transactions commit
3499          # automatically when complete, and run independently from one another.
3500          #
3501          # To reduce lock contention, this execution strategy only acquires read locks
3502          # on rows that match the WHERE clause of the statement. Additionally, the
3503          # smaller per-partition transactions hold locks for less time.
3504          #
3505          # That said, Partitioned DML is not a drop-in replacement for standard DML used
3506          # in ReadWrite transactions.
3507          #
3508          #  - The DML statement must be fully-partitionable. Specifically, the statement
3509          #    must be expressible as the union of many statements which each access only
3510          #    a single row of the table.
3511          #
3512          #  - The statement is not applied atomically to all rows of the table. Rather,
3513          #    the statement is applied atomically to partitions of the table, in
3514          #    independent transactions. Secondary index rows are updated atomically
3515          #    with the base table rows.
3516          #
3517          #  - Partitioned DML does not guarantee exactly-once execution semantics
3518          #    against a partition. The statement will be applied at least once to each
3519          #    partition. It is strongly recommended that the DML statement be
3520          #    idempotent to avoid unexpected results. For instance, it is potentially
3521          #    dangerous to run a statement such as
3522          #    `UPDATE table SET column = column + 1` as it could be run multiple times
3523          #    against some rows.
3524          #
3525          #  - The partitions are committed automatically - there is no support for
3526          #    Commit or Rollback. If the call returns an error, or if the client issuing
3527          #    the ExecuteSql call dies, it is possible that some rows had the statement
3528          #    executed on them successfully. It is also possible that the statement was
3529          #    never executed against other rows.
3530          #
3531          #  - Partitioned DML transactions may only contain the execution of a single
3532          #    DML statement via ExecuteSql or ExecuteStreamingSql.
3533          #
3534          #  - If any error is encountered during the execution of the partitioned DML
3535          #    operation (for instance, a UNIQUE INDEX violation, division by zero, or a
3536          #    value that cannot be stored due to schema constraints), then the
3537          #    operation is stopped at that point and an error is returned. It is
3538          #    possible that at this point, some partitions have been committed (or even
3539          #    committed multiple times), and other partitions have not been run at all.
3540          #
3541          # Given the above, Partitioned DML is a good fit for large, database-wide
3542          # operations that are idempotent, such as deleting old rows from a very large
3543          # table.
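As a concrete sketch, the beginTransaction request body for a Partitioned DML transaction carries an empty `partitionedDml` option; the session name and `spanner` service object below are hypothetical:

```python
# BeginTransactionRequest body: Partitioned DML takes no options.
body = {"options": {"partitionedDml": {}}}

# Hypothetical fully-qualified session resource name.
session = ("projects/my-project/instances/my-instance"
           "/databases/my-db/sessions/my-session")

# With a built discovery client (not constructed here):
# txn = spanner.projects().instances().databases().sessions().beginTransaction(
#     session=session, body=body).execute()
```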
3544        "readWrite": { # Message type to initiate a read-write transaction. Currently this transaction type has no options. # Transaction may write.
3545            #
3546            # Authorization to begin a read-write transaction requires
3547            # `spanner.databases.beginOrRollbackReadWriteTransaction` permission
3548            # on the `session` resource.
3550        },
3551        "readOnly": { # Message type to initiate a read-only transaction. # Transaction will not write.
3552            #
3553            # Authorization to begin a read-only transaction requires
3554            # `spanner.databases.beginReadOnlyTransaction` permission
3555            # on the `session` resource.
3556          "minReadTimestamp": "A String", # Executes all reads at a timestamp >= `min_read_timestamp`.
3557              #
3558              # This is useful for requesting fresher data than some previous
3559              # read, or data that is fresh enough to observe the effects of some
3560              # previously committed transaction whose timestamp is known.
3561              #
3562              # Note that this option can only be used in single-use transactions.
3563              #
3564              # A timestamp in RFC3339 UTC "Zulu" format, accurate to nanoseconds.
3565              # Example: `"2014-10-02T15:01:23.045123456Z"`.
3566          "returnReadTimestamp": True or False, # If true, the Cloud Spanner-selected read timestamp is included in
3567              # the Transaction message that describes the transaction.
3568          "maxStaleness": "A String", # Read data at a timestamp >= `NOW - max_staleness`
3569              # seconds. Guarantees that all writes that have committed more
3570              # than the specified number of seconds ago are visible. Because
3571              # Cloud Spanner chooses the exact timestamp, this mode works even if
3572              # the client's local clock is substantially skewed from Cloud Spanner
3573              # commit timestamps.
3574              #
3575              # Useful for reading the freshest data available at a nearby
3576              # replica, while bounding the possible staleness if the local
3577              # replica has fallen behind.
3578              #
3579              # Note that this option can only be used in single-use
3580              # transactions.
3581          "exactStaleness": "A String", # Executes all reads at a timestamp that is `exact_staleness`
3582              # old. The timestamp is chosen soon after the read is started.
3583              #
3584              # Guarantees that all writes that have committed more than the
3585              # specified number of seconds ago are visible. Because Cloud Spanner
3586              # chooses the exact timestamp, this mode works even if the client's
3587              # local clock is substantially skewed from Cloud Spanner commit
3588              # timestamps.
3589              #
3590              # Useful for reading at nearby replicas without the distributed
3591              # timestamp negotiation overhead of `max_staleness`.
3592          "readTimestamp": "A String", # Executes all reads at the given timestamp. Unlike other modes,
3593              # reads at a specific timestamp are repeatable; the same read at
3594              # the same timestamp always returns the same data. If the
3595              # timestamp is in the future, the read will block until the
3596              # specified timestamp, modulo the read's deadline.
3597              #
3598              # Useful for large scale consistent reads such as mapreduces, or
3599              # for coordinating many reads against a consistent snapshot of the
3600              # data.
3601              #
3602              # A timestamp in RFC3339 UTC "Zulu" format, accurate to nanoseconds.
3603              # Example: `"2014-10-02T15:01:23.045123456Z"`.
3604          "strong": True or False, # Read at a timestamp where all previously committed transactions
3605              # are visible.
3606        },
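The `minReadTimestamp` and `readTimestamp` strings above can be built with the standard library; note that `datetime` only carries microseconds, whereas Cloud Spanner accepts up to nanoseconds:

```python
from datetime import datetime, timezone

def to_rfc3339_zulu(dt):
    # Normalize to UTC and swap isoformat()'s '+00:00' suffix for 'Z'.
    return dt.astimezone(timezone.utc).isoformat().replace("+00:00", "Z")

ts = to_rfc3339_zulu(
    datetime(2014, 10, 2, 15, 1, 23, 45123, tzinfo=timezone.utc))
```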
3607        "partitionedDml": { # Message type to initiate a Partitioned DML transaction. # Partitioned DML transaction.
3608            #
3609            # Authorization to begin a Partitioned DML transaction requires
3610            # `spanner.databases.beginPartitionedDmlTransaction` permission
3611            # on the `session` resource.
3612        },
3613      },
3614      "singleUse": { # # Transactions # Execute the read or SQL query in a temporary transaction.
3615          # This is the most efficient way to execute a transaction that
3616          # consists of a single SQL query.
3617          #
3618          #
3619          # Each session can have at most one active transaction at a time. After the
3620          # active transaction is completed, the session can immediately be
3621          # re-used for the next transaction. It is not necessary to create a
3622          # new session for each transaction.
3623          #
3624          # # Transaction Modes
3625          #
3626          # Cloud Spanner supports three transaction modes:
3627          #
3628          #   1. Locking read-write. This type of transaction is the only way
3629          #      to write data into Cloud Spanner. These transactions rely on
3630          #      pessimistic locking and, if necessary, two-phase commit.
3631          #      Locking read-write transactions may abort, requiring the
3632          #      application to retry.
3633          #
3634          #   2. Snapshot read-only. This transaction type provides guaranteed
3635          #      consistency across several reads, but does not allow
3636          #      writes. Snapshot read-only transactions can be configured to
3637          #      read at timestamps in the past. Snapshot read-only
3638          #      transactions do not need to be committed.
3639          #
3640          #   3. Partitioned DML. This type of transaction is used to execute
3641          #      a single Partitioned DML statement. Partitioned DML partitions
3642          #      the key space and runs the DML statement over each partition
3643          #      in parallel using separate, internal transactions that commit
3644          #      independently. Partitioned DML transactions do not need to be
3645          #      committed.
3646          #
        "readWrite": { # Message type to initiate a read-write transaction. Currently this # Transaction may write.
            # transaction type has no options.
            #
            # Authorization to begin a read-write transaction requires
            # `spanner.databases.beginOrRollbackReadWriteTransaction` permission
            # on the `session` resource.
        },
        "readOnly": { # Message type to initiate a read-only transaction. # Transaction will not write.
            #
            # Authorization to begin a read-only transaction requires
            # `spanner.databases.beginReadOnlyTransaction` permission
            # on the `session` resource.
          "minReadTimestamp": "A String", # Executes all reads at a timestamp >= `min_read_timestamp`.
              #
              # This is useful for requesting fresher data than some previous
              # read, or data that is fresh enough to observe the effects of some
              # previously committed transaction whose timestamp is known.
              #
              # Note that this option can only be used in single-use transactions.
              #
              # A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
              # Example: `"2014-10-02T15:01:23.045123456Z"`.
          "returnReadTimestamp": True or False, # If true, the Cloud Spanner-selected read timestamp is included in
              # the Transaction message that describes the transaction.
          "maxStaleness": "A String", # Read data at a timestamp >= `NOW - max_staleness`
              # seconds. Guarantees that all writes that have committed more
              # than the specified number of seconds ago are visible. Because
              # Cloud Spanner chooses the exact timestamp, this mode works even if
              # the client's local clock is substantially skewed from Cloud Spanner
              # commit timestamps.
              #
              # Useful for reading the freshest data available at a nearby
              # replica, while bounding the possible staleness if the local
              # replica has fallen behind.
              #
              # Note that this option can only be used in single-use
              # transactions.
          "exactStaleness": "A String", # Executes all reads at a timestamp that is `exact_staleness`
              # old. The timestamp is chosen soon after the read is started.
              #
              # Guarantees that all writes that have committed more than the
              # specified number of seconds ago are visible. Because Cloud Spanner
              # chooses the exact timestamp, this mode works even if the client's
              # local clock is substantially skewed from Cloud Spanner commit
              # timestamps.
              #
              # Useful for reading at nearby replicas without the distributed
              # timestamp negotiation overhead of `max_staleness`.
          "readTimestamp": "A String", # Executes all reads at the given timestamp. Unlike other modes,
              # reads at a specific timestamp are repeatable; the same read at
              # the same timestamp always returns the same data. If the
              # timestamp is in the future, the read will block until the
              # specified timestamp, modulo the read's deadline.
              #
              # Useful for large scale consistent reads such as mapreduces, or
              # for coordinating many reads against a consistent snapshot of the
              # data.
              #
              # A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
              # Example: `"2014-10-02T15:01:23.045123456Z"`.
          "strong": True or False, # Read at a timestamp where all previously committed transactions
              # are visible.
        },
        "partitionedDml": { # Message type to initiate a Partitioned DML transaction. # Partitioned DML transaction.
            #
            # Authorization to begin a Partitioned DML transaction requires
            # `spanner.databases.beginPartitionedDmlTransaction` permission
            # on the `session` resource.
        },
      },
      "id": "A String", # Execute the read or SQL query in a previously-started transaction.
    },
    "seqno": "A String", # A per-transaction sequence number used to identify this request. This
        # makes each request idempotent such that if the request is received multiple
        # times, at most one will succeed.
        #
        # The sequence number must be monotonically increasing within the
        # transaction. If a request arrives for the first time with an out-of-order
        # sequence number, the transaction may be aborted. Replays of previously
        # handled requests will yield the same response as the first execution.
        #
        # Required for DML statements. Ignored for queries.
    "resumeToken": "A String", # If this request is resuming a previously interrupted SQL statement
        # execution, `resume_token` should be copied from the last
        # PartialResultSet yielded before the interruption. Doing this
        # enables the new SQL statement execution to resume where the last one left
        # off. The rest of the request parameters must exactly match the
        # request that yielded this token.
    "partitionToken": "A String", # If present, results will be restricted to the specified partition
        # previously created using PartitionQuery().  There must be an exact
        # match for the values of fields common to this message and the
        # PartitionQueryRequest message used to create this partition_token.
    "paramTypes": { # It is not always possible for Cloud Spanner to infer the right SQL type
        # from a JSON value.  For example, values of type `BYTES` and values
        # of type `STRING` both appear in params as JSON strings.
        #
        # In these cases, `param_types` can be used to specify the exact
        # SQL type for some or all of the SQL statement parameters. See the
        # definition of Type for more information
        # about SQL types.
      "a_key": { # `Type` indicates the type of a Cloud Spanner value, as might be stored in a
          # table cell or returned from an SQL query.
        "structType": # Object with schema name: StructType # If code == STRUCT, then `struct_type`
            # provides type information for the struct's fields.
        "code": "A String", # Required. The TypeCode for this type.
        "arrayElementType": # Object with schema name: Type # If code == ARRAY, then `array_element_type`
            # is the type of the array elements.
      },
    },
    "queryMode": "A String", # Used to control the amount of debugging information returned in
        # ResultSetStats. If partition_token is set, query_mode can only
        # be set to QueryMode.NORMAL.
    "sql": "A String", # Required. The SQL string.
    "params": { # The SQL string can contain parameter placeholders. A parameter
        # placeholder consists of `'@'` followed by the parameter
        # name. Parameter names consist of any combination of letters,
        # numbers, and underscores.
        #
        # Parameters can appear anywhere that a literal value is expected.  The same
        # parameter name can be used more than once, for example:
        #   `"WHERE id > @msg_id AND id < @msg_id + 100"`
        #
        # It is an error to execute an SQL statement with unbound parameters.
        #
        # Parameter values are specified using `params`, which is a JSON
        # object whose keys are parameter names, and whose values are the
        # corresponding parameter values.
      "a_key": "", # Properties of the object.
    },
  }

  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # Partial results from a streaming read or SQL query. Streaming reads and
      # SQL queries better tolerate large result sets, large rows, and large
      # values, but are a little trickier to consume.
    "resumeToken": "A String", # Streaming calls might be interrupted for a variety of reasons, such
        # as TCP connection loss. If this occurs, the stream of results can
        # be resumed by re-sending the original request and including
        # `resume_token`. Note that executing any other transaction in the
        # same session invalidates the token.
    "chunkedValue": True or False, # If true, then the final value in values is chunked, and must
        # be combined with more values from subsequent `PartialResultSet`s
        # to obtain a complete field value.
    "values": [ # A streamed result set consists of a stream of values, which might
        # be split into many `PartialResultSet` messages to accommodate
        # large rows and/or large values. Every N complete values defines a
        # row, where N is equal to the number of entries in
        # metadata.row_type.fields.
        #
        # Most values are encoded based on type as described
        # here.
        #
        # It is possible that the last value in values is "chunked",
        # meaning that the rest of the value is sent in subsequent
        # `PartialResultSet`(s). This is denoted by the chunked_value
        # field. Two or more chunked values can be merged to form a
        # complete value as follows:
        #
        #   * `bool/number/null`: cannot be chunked
        #   * `string`: concatenate the strings
        #   * `list`: concatenate the lists. If the last element in a list is a
        #     `string`, `list`, or `object`, merge it with the first element in
        #     the next list by applying these rules recursively.
        #   * `object`: concatenate the (field name, field value) pairs. If a
        #     field name is duplicated, then apply these rules recursively
        #     to merge the field values.
        #
        # Some examples of merging:
        #
        #     # Strings are concatenated.
        #     "foo", "bar" => "foobar"
        #
        #     # Lists of non-strings are concatenated.
        #     [2, 3], [4] => [2, 3, 4]
        #
        #     # Lists are concatenated, but the last and first elements are merged
        #     # because they are strings.
        #     ["a", "b"], ["c", "d"] => ["a", "bc", "d"]
        #
        #     # Lists are concatenated, but the last and first elements are merged
        #     # because they are lists. Recursively, the last and first elements
        #     # of the inner lists are merged because they are strings.
        #     ["a", ["b", "c"]], [["d"], "e"] => ["a", ["b", "cd"], "e"]
        #
        #     # Non-overlapping object fields are combined.
        #     {"a": "1"}, {"b": "2"} => {"a": "1", "b": "2"}
        #
        #     # Overlapping object fields are merged.
        #     {"a": "1"}, {"a": "2"} => {"a": "12"}
        #
        #     # Examples of merging objects containing lists of strings.
        #     {"a": ["1"]}, {"a": ["2"]} => {"a": ["12"]}
        #
        # For a more complete example, suppose a streaming SQL query is
        # yielding a result set whose rows contain a single string
        # field. The following `PartialResultSet`s might be yielded:
        #
        #     {
        #       "metadata": { ... }
        #       "values": ["Hello", "W"]
        #       "chunked_value": true
        #       "resume_token": "Af65..."
        #     }
        #     {
        #       "values": ["orl"]
        #       "chunked_value": true
        #       "resume_token": "Bqp2..."
        #     }
        #     {
        #       "values": ["d"]
        #       "resume_token": "Zx1B..."
        #     }
        #
        # This sequence of `PartialResultSet`s encodes two rows, one
        # containing the field value `"Hello"`, and a second containing the
        # field value `"World" = "W" + "orl" + "d"`.
      "",
    ],
    "stats": { # Additional statistics about a ResultSet or PartialResultSet. # Query plan and execution statistics for the statement that produced this
        # streaming result set. These can be requested by setting
        # ExecuteSqlRequest.query_mode and are sent
        # only once with the last response in the stream.
        # This field will also be present in the last response for DML
        # statements.
      "rowCountLowerBound": "A String", # Partitioned DML does not offer exactly-once semantics, so it
          # returns a lower bound of the rows modified.
      "rowCountExact": "A String", # Standard DML returns an exact count of rows that were modified.
      "queryPlan": { # Contains an ordered list of nodes appearing in the query plan. # QueryPlan for the query associated with this result.
        "planNodes": [ # The nodes in the query plan. Plan nodes are returned in pre-order starting
            # with the plan root. Each PlanNode's `id` corresponds to its index in
            # `plan_nodes`.
          { # Node information for nodes appearing in a QueryPlan.plan_nodes.
            "index": 42, # The `PlanNode`'s index in the node list.
            "kind": "A String", # Used to determine the type of node. May be needed for visualizing
                # different kinds of nodes differently. For example, if the node is a
                # SCALAR node, it will have a condensed representation
                # which can be used to directly embed a description of the node in its
                # parent.
            "displayName": "A String", # The display name for the node.
            "executionStats": { # The execution statistics associated with the node, contained in a group of
                # key-value pairs. Only present if the plan was returned as a result of a
                # profile query. For example, number of executions, number of rows/time per
                # execution etc.
              "a_key": "", # Properties of the object.
            },
            "childLinks": [ # List of child node `index`es and their relationship to this parent.
              { # Metadata associated with a parent-child relationship appearing in a
                  # PlanNode.
                "variable": "A String", # Only present if the child node is SCALAR and corresponds
                    # to an output variable of the parent node. The field carries the name of
                    # the output variable.
                    # For example, a `TableScan` operator that reads rows from a table will
                    # have child links to the `SCALAR` nodes representing the output variables
                    # created for each column that is read by the operator. The corresponding
                    # `variable` fields will be set to the variable names assigned to the
                    # columns.
                "childIndex": 42, # The node to which the link points.
                "type": "A String", # The type of the link. For example, in Hash Joins this could be used to
                    # distinguish between the build child and the probe child, or in the case
                    # of the child being an output variable, to represent the tag associated
                    # with the output variable.
              },
            ],
            "shortRepresentation": { # Condensed representation of a node and its subtree. Only present for # Condensed representation for SCALAR nodes.
                # `SCALAR` PlanNode(s).
              "subqueries": { # A mapping of (subquery variable name) -> (subquery node id) for cases
                  # where the `description` string of this node references a `SCALAR`
                  # subquery contained in the expression subtree rooted at this node. The
                  # referenced `SCALAR` subquery may not necessarily be a direct child of
                  # this node.
                "a_key": 42,
              },
              "description": "A String", # A string representation of the expression subtree rooted at this node.
            },
            "metadata": { # Attributes relevant to the node contained in a group of key-value pairs.
                # For example, a Parameter Reference node could have the following
                # information in its metadata:
                #
                #     {
                #       "parameter_reference": "param1",
                #       "parameter_type": "array"
                #     }
              "a_key": "", # Properties of the object.
            },
          },
        ],
      },
      "queryStats": { # Aggregated statistics from the execution of the query. Only present when
          # the query is profiled. For example, a query could return the statistics as
          # follows:
          #
          #     {
          #       "rows_returned": "3",
          #       "elapsed_time": "1.22 secs",
          #       "cpu_time": "1.19 secs"
          #     }
        "a_key": "", # Properties of the object.
      },
    },
    "metadata": { # Metadata about a ResultSet or PartialResultSet. # Metadata about the result set, such as row type information.
        # Only present in the first response.
      "rowType": { # `StructType` defines the fields of a STRUCT type. # Indicates the field names and types for the rows in the result
          # set.  For example, a SQL query like `"SELECT UserId, UserName FROM
          # Users"` could return a `row_type` value like:
          #
          #     "fields": [
          #       { "name": "UserId", "type": { "code": "INT64" } },
          #       { "name": "UserName", "type": { "code": "STRING" } },
          #     ]
        "fields": [ # The list of fields that make up this struct. Order is
            # significant, because values of this struct type are represented as
            # lists, where the order of field values matches the order of
            # fields in the StructType. In turn, the order of fields
            # matches the order of columns in a read request, or the order of
            # fields in the `SELECT` clause of a query.
          { # Message representing a single field of a struct.
            "type": { # `Type` indicates the type of a Cloud Spanner value, as might be stored in a # The type of the field.
                # table cell or returned from an SQL query.
              "structType": # Object with schema name: StructType # If code == STRUCT, then `struct_type`
                  # provides type information for the struct's fields.
              "code": "A String", # Required. The TypeCode for this type.
              "arrayElementType": # Object with schema name: Type # If code == ARRAY, then `array_element_type`
                  # is the type of the array elements.
            },
            "name": "A String", # The name of the field. For reads, this is the column name. For
                # SQL queries, it is the column alias (e.g., `"Word"` in the
                # query `"SELECT 'hello' AS Word"`), or the column name (e.g.,
                # `"ColName"` in the query `"SELECT ColName FROM Table"`). Some
                # columns might have an empty name (e.g., `"SELECT
                # UPPER(ColName)"`). Note that a query result can contain
                # multiple fields with the same name.
          },
        ],
      },
      "transaction": { # A transaction. # If the read or SQL query began a transaction as a side-effect, the
          # information about the new transaction is yielded here.
        "readTimestamp": "A String", # For snapshot read-only transactions, the read timestamp chosen
            # for the transaction. Not returned by default: see
            # TransactionOptions.ReadOnly.return_read_timestamp.
            #
            # A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
            # Example: `"2014-10-02T15:01:23.045123456Z"`.
        "id": "A String", # `id` may be used to identify the transaction in subsequent
            # Read,
            # ExecuteSql,
            # Commit, or
            # Rollback calls.
            #
            # Single-use read-only transactions do not have IDs, because
            # single-use transactions do not support multiple requests.
      },
    },
  }</pre>
</div>

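<p>The chunk-merging rules described in the response documentation above translate directly into code. The helper below is an illustrative sketch, not part of the client library: the function name is ours, and the values are assumed to have already been JSON-decoded into Python strings, lists, and dicts.</p>

```python
# Illustrative sketch of the PartialResultSet chunk-merging rules above.
# `merge_chunks` is a hypothetical helper; values are assumed JSON-decoded.

def merge_chunks(last, first):
    """Merge the last value of one PartialResultSet with the first of the next."""
    if isinstance(last, str) and isinstance(first, str):
        # Strings: concatenate.
        return last + first
    if isinstance(last, list) and isinstance(first, list):
        if last and first and isinstance(last[-1], (str, list, dict)):
            # Merge the boundary elements recursively; concatenate the rest.
            return last[:-1] + [merge_chunks(last[-1], first[0])] + first[1:]
        # Lists of non-strings: plain concatenation.
        return last + first
    if isinstance(last, dict) and isinstance(first, dict):
        # Objects: combine fields, merging duplicated field names recursively.
        merged = dict(last)
        for key, value in first.items():
            merged[key] = merge_chunks(merged[key], value) if key in merged else value
        return merged
    raise ValueError("bool/number/null values cannot be chunked")
```

<p>Applied to the worked example above, merging the boundary values of <code>["Hello", "W"]</code>, <code>["orl"]</code>, and <code>["d"]</code> whenever <code>chunked_value</code> is true reassembles the row values <code>"Hello"</code> and <code>"World"</code>.</p>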
<div class="method">
    <code class="details" id="get">get(name, x__xgafv=None)</code>
  <pre>Gets a session. Returns `NOT_FOUND` if the session does not exist.
This is mainly useful for determining whether a session is still
alive.

Args:
  name: string, Required. The name of the session to retrieve. (required)
  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # A session in the Cloud Spanner API.
    "labels": { # The labels for the session.
        #
        #  * Label keys must be between 1 and 63 characters long and must conform to
        #    the following regular expression: `[a-z]([-a-z0-9]*[a-z0-9])?`.
        #  * Label values must be between 0 and 63 characters long and must conform
        #    to the regular expression `([a-z]([-a-z0-9]*[a-z0-9])?)?`.
        #  * No more than 64 labels can be associated with a given session.
        #
        # See https://goo.gl/xmQnxf for more information on and examples of labels.
      "a_key": "A String",
    },
    "name": "A String", # The name of the session. This is always system-assigned; values provided
        # when creating a session are ignored.
    "approximateLastUseTime": "A String", # Output only. The approximate timestamp when the session was last used. It is
        # typically earlier than the actual last use time.
    "createTime": "A String", # Output only. The timestamp when the session was created.
  }</pre>
</div>

<div class="method">
    <code class="details" id="list">list(database, pageSize=None, filter=None, pageToken=None, x__xgafv=None)</code>
  <pre>Lists all sessions in a given database.

Args:
  database: string, Required. The database in which to list sessions. (required)
  pageSize: integer, Number of sessions to be returned in the response. If 0 or less, defaults
to the server's maximum allowed page size.
  filter: string, An expression for filtering the results of the request. Filter rules are
case insensitive. The fields eligible for filtering are:

  * `labels.key` where key is the name of a label

Some examples of using filters are:

  * `labels.env:*` --> The session has the label "env".
  * `labels.env:dev` --> The session has the label "env" and the value of
                       the label contains the string "dev".
  pageToken: string, If non-empty, `page_token` should contain a
next_page_token from a previous
ListSessionsResponse.
  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # The response for ListSessions.
    "nextPageToken": "A String", # `next_page_token` can be sent in a subsequent
        # ListSessions call to fetch more of the matching
        # sessions.
    "sessions": [ # The list of requested sessions.
      { # A session in the Cloud Spanner API.
        "labels": { # The labels for the session.
            #
            #  * Label keys must be between 1 and 63 characters long and must conform to
            #    the following regular expression: `[a-z]([-a-z0-9]*[a-z0-9])?`.
            #  * Label values must be between 0 and 63 characters long and must conform
            #    to the regular expression `([a-z]([-a-z0-9]*[a-z0-9])?)?`.
            #  * No more than 64 labels can be associated with a given session.
            #
            # See https://goo.gl/xmQnxf for more information on and examples of labels.
          "a_key": "A String",
        },
        "name": "A String", # The name of the session. This is always system-assigned; values provided
            # when creating a session are ignored.
        "approximateLastUseTime": "A String", # Output only. The approximate timestamp when the session was last used. It is
            # typically earlier than the actual last use time.
        "createTime": "A String", # Output only. The timestamp when the session was created.
      },
    ],
  }</pre>
</div>

<div class="method">
    <code class="details" id="list_next">list_next(previous_request, previous_response)</code>
  <pre>Retrieves the next page of results.

Args:
  previous_request: The request for the previous page. (required)
  previous_response: The response from the request for the previous page. (required)

Returns:
  A request object that you can call 'execute()' on to request the next
  page. Returns None if there are no more items in the collection.
    </pre>
</div>

<div class="method">
    <code class="details" id="partitionQuery">partitionQuery(session, body, x__xgafv=None)</code>
  <pre>Creates a set of partition tokens that can be used to execute a query
operation in parallel.  Each of the returned partition tokens can be used
by ExecuteStreamingSql to specify a subset
of the query result to read.  The same session and read-only transaction
must be used by the PartitionQueryRequest used to create the
partition tokens and the ExecuteSqlRequests that use the partition tokens.

Partition tokens become invalid when the session used to create them
is deleted, is idle for too long, begins a new transaction, or becomes too
old.  When any of these happen, it is not possible to resume the query, and
the whole operation must be restarted from the beginning.

Args:
  session: string, Required. The session used to create the partitions. (required)
  body: object, The request body. (required)
    The object takes the form of:

{ # The request for PartitionQuery
    "paramTypes": { # It is not always possible for Cloud Spanner to infer the right SQL type
        # from a JSON value.  For example, values of type `BYTES` and values
        # of type `STRING` both appear in params as JSON strings.
        #
        # In these cases, `param_types` can be used to specify the exact
        # SQL type for some or all of the SQL query parameters. See the
        # definition of Type for more information
        # about SQL types.
      "a_key": { # `Type` indicates the type of a Cloud Spanner value, as might be stored in a
          # table cell or returned from an SQL query.
        "structType": # Object with schema name: StructType # If code == STRUCT, then `struct_type`
            # provides type information for the struct's fields.
        "code": "A String", # Required. The TypeCode for this type.
        "arrayElementType": # Object with schema name: Type # If code == ARRAY, then `array_element_type`
            # is the type of the array elements.
      },
    },
    "partitionOptions": { # Options for a PartitionQueryRequest and # Additional options that affect how many partitions are created.
        # PartitionReadRequest.
      "maxPartitions": "A String", # **Note:** This hint is currently ignored by PartitionQuery and
          # PartitionRead requests.
          #
          # The desired maximum number of partitions to return.  For example, this may
          # be set to the number of workers available.  The default for this option
          # is currently 10,000. The maximum value is currently 200,000.  This is only
          # a hint.  The actual number of partitions returned may be smaller or larger
          # than this maximum count request.
      "partitionSizeBytes": "A String", # **Note:** This hint is currently ignored by PartitionQuery and
          # PartitionRead requests.
          #
          # The desired data size for each partition generated.  The default for this
          # option is currently 1 GiB.  This is only a hint. The actual size of each
          # partition may be smaller or larger than this size request.
    },
4415    "transaction": { # This message is used to select the transaction in which a Read or
4416        # ExecuteSql call runs. Read-only snapshot transactions are
4417        # supported; read/write and single-use transactions are not.
4418        #
4419        #
4420        # See TransactionOptions for more information about transactions.
4421      "begin": { # Begin a new transaction and execute this read or SQL query in
4422          # it. The transaction ID of the new transaction is returned in
4423          # ResultSetMetadata.transaction, which is a Transaction.
4424          #
4425          # # Transactions
4426          # Each session can have at most one active transaction at a time. After the
4427          # active transaction is completed, the session can immediately be
4428          # re-used for the next transaction. It is not necessary to create a
4429          # new session for each transaction.
4430          #
4431          # # Transaction Modes
4432          #
4433          # Cloud Spanner supports three transaction modes:
4434          #
4435          #   1. Locking read-write. This type of transaction is the only way
4436          #      to write data into Cloud Spanner. These transactions rely on
4437          #      pessimistic locking and, if necessary, two-phase commit.
4438          #      Locking read-write transactions may abort, requiring the
4439          #      application to retry.
4440          #
4441          #   2. Snapshot read-only. This transaction type provides guaranteed
4442          #      consistency across several reads, but does not allow
4443          #      writes. Snapshot read-only transactions can be configured to
4444          #      read at timestamps in the past. Snapshot read-only
4445          #      transactions do not need to be committed.
4446          #
4447          #   3. Partitioned DML. This type of transaction is used to execute
4448          #      a single Partitioned DML statement. Partitioned DML partitions
4449          #      the key space and runs the DML statement over each partition
4450          #      in parallel using separate, internal transactions that commit
4451          #      independently. Partitioned DML transactions do not need to be
4452          #      committed.
4453          #
4454          # For transactions that only read, snapshot read-only transactions
4455          # provide simpler semantics and are almost always faster. In
4456          # particular, read-only transactions do not take locks, so they do
4457          # not conflict with read-write transactions. As a consequence of not
4458          # taking locks, they also do not abort, so retry loops are not needed.
4459          #
4460          # Transactions may only read/write data in a single database. They
4461          # may, however, read/write data in different tables within that
4462          # database.
4463          #
4464          # ## Locking Read-Write Transactions
4465          #
4466          # Locking transactions may be used to atomically read-modify-write
4467          # data anywhere in a database. This type of transaction is externally
4468          # consistent.
4469          #
4470          # Clients should attempt to minimize the amount of time a transaction
4471          # is active. Faster transactions commit with higher probability
4472          # and cause less contention. Cloud Spanner attempts to keep read locks
4473          # active as long as the transaction continues to do reads, and the
4474          # transaction has not been terminated by
4475          # Commit or
4476          # Rollback.  Long periods of
4477          # inactivity at the client may cause Cloud Spanner to release a
4478          # transaction's locks and abort it.
4479          #
4480          # Conceptually, a read-write transaction consists of zero or more
4481          # reads or SQL statements followed by
4482          # Commit. At any time before
4483          # Commit, the client can send a
4484          # Rollback request to abort the
4485          # transaction.
4486          #
4487          # ### Semantics
4488          #
4489          # Cloud Spanner can commit the transaction if all read locks it acquired
4490          # are still valid at commit time, and it is able to acquire write
4491          # locks for all writes. Cloud Spanner can abort the transaction for any
4492          # reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees
4493          # that the transaction has not modified any user data in Cloud Spanner.
4494          #
4495          # Unless the transaction commits, Cloud Spanner makes no guarantees about
4496          # how long the transaction's locks were held for. It is an error to
4497          # use Cloud Spanner locks for any sort of mutual exclusion other than
4498          # between Cloud Spanner transactions themselves.
4499          #
4500          # ### Retrying Aborted Transactions
4501          #
4502          # When a transaction aborts, the application can choose to retry the
4503          # whole transaction again. To maximize the chances of successfully
4504          # committing the retry, the client should execute the retry in the
4505          # same session as the original attempt. The original session's lock
4506          # priority increases with each consecutive abort, meaning that each
4507          # attempt has a slightly better chance of success than the previous.
4508          #
4509          # Under some circumstances (e.g., many transactions attempting to
4510          # modify the same row(s)), a transaction can abort many times in a
4511          # short period before successfully committing. Thus, it is not a good
4512          # idea to cap the number of retries a transaction can attempt;
4513          # instead, it is better to limit the total amount of wall time spent
4514          # retrying.
4515          #
4516          # ### Idle Transactions
4517          #
4518          # A transaction is considered idle if it has no outstanding reads or
4519          # SQL queries and has not started a read or SQL query within the last 10
4520          # seconds. Idle transactions can be aborted by Cloud Spanner so that they
4521          # don't hold on to locks indefinitely. In that case, the commit will
4522          # fail with error `ABORTED`.
4523          #
4524          # If this behavior is undesirable, periodically executing a simple
4525          # SQL query in the transaction (e.g., `SELECT 1`) prevents the
4526          # transaction from becoming idle.
4527          #
4528          # ## Snapshot Read-Only Transactions
4529          #
4530          # Snapshot read-only transactions provide a simpler method than
4531          # locking read-write transactions for doing several consistent
4532          # reads. However, this type of transaction does not support writes.
4533          #
4534          # Snapshot transactions do not take locks. Instead, they work by
4535          # choosing a Cloud Spanner timestamp, then executing all reads at that
4536          # timestamp. Since they do not acquire locks, they do not block
4537          # concurrent read-write transactions.
4538          #
4539          # Unlike locking read-write transactions, snapshot read-only
4540          # transactions never abort. They can fail if the chosen read
4541          # timestamp is garbage collected; however, the default garbage
4542          # collection policy is generous enough that most applications do not
4543          # need to worry about this in practice.
4544          #
4545          # Snapshot read-only transactions do not need to call
4546          # Commit or
4547          # Rollback (and in fact are not
4548          # permitted to do so).
4549          #
4550          # To execute a snapshot transaction, the client specifies a timestamp
4551          # bound, which tells Cloud Spanner how to choose a read timestamp.
4552          #
4553          # The types of timestamp bound are:
4554          #
4555          #   - Strong (the default).
4556          #   - Bounded staleness.
4557          #   - Exact staleness.
4558          #
4559          # If the Cloud Spanner database to be read is geographically distributed,
4560          # stale read-only transactions can execute more quickly than strong
4561          # or read-write transactions, because they are able to execute far
4562          # from the leader replica.
4563          #
4564          # Each type of timestamp bound is discussed in detail below.
4565          #
4566          # ### Strong
4567          #
4568          # Strong reads are guaranteed to see the effects of all transactions
4569          # that have committed before the start of the read. Furthermore, all
4570          # rows yielded by a single read are consistent with each other -- if
4571          # any part of the read observes a transaction, all parts of the read
4572          # see the transaction.
4573          #
4574          # Strong reads are not repeatable: two consecutive strong read-only
4575          # transactions might return inconsistent results if there are
4576          # concurrent writes. If consistency across reads is required, the
4577          # reads should be executed within a transaction or at an exact read
4578          # timestamp.
4579          #
4580          # See TransactionOptions.ReadOnly.strong.
4581          #
4582          # ### Exact Staleness
4583          #
4584          # These timestamp bounds execute reads at a user-specified
4585          # timestamp. Reads at a timestamp are guaranteed to see a consistent
4586          # prefix of the global transaction history: they observe
4587          # modifications done by all transactions with a commit timestamp <=
4588          # the read timestamp, and observe none of the modifications done by
4589          # transactions with a larger commit timestamp. They will block until
4590          # all conflicting transactions that may be assigned commit timestamps
4591          # <= the read timestamp have finished.
4592          #
4593          # The timestamp can either be expressed as an absolute Cloud Spanner commit
4594          # timestamp or a staleness relative to the current time.
4595          #
4596          # These modes do not require a "negotiation phase" to pick a
4597          # timestamp. As a result, they execute slightly faster than the
4598          # equivalent boundedly stale concurrency modes. On the other hand,
4599          # boundedly stale reads usually return fresher results.
4600          #
4601          # See TransactionOptions.ReadOnly.read_timestamp and
4602          # TransactionOptions.ReadOnly.exact_staleness.
4603          #
4604          # ### Bounded Staleness
4605          #
4606          # Bounded staleness modes allow Cloud Spanner to pick the read timestamp,
4607          # subject to a user-provided staleness bound. Cloud Spanner chooses the
4608          # newest timestamp within the staleness bound that allows execution
4609          # of the reads at the closest available replica without blocking.
4610          #
4611          # All rows yielded are consistent with each other -- if any part of
4612          # the read observes a transaction, all parts of the read see the
4613          # transaction. Boundedly stale reads are not repeatable: two stale
4614          # reads, even if they use the same staleness bound, can execute at
4615          # different timestamps and thus return inconsistent results.
4616          #
4617          # Boundedly stale reads execute in two phases: the first phase
4618          # negotiates a timestamp among all replicas needed to serve the
4619          # read. In the second phase, reads are executed at the negotiated
4620          # timestamp.
4621          #
4622          # As a result of the two-phase execution, bounded staleness reads are
4623          # usually a little slower than comparable exact staleness
4624          # reads. However, they are typically able to return fresher
4625          # results, and are more likely to execute at the closest replica.
4626          #
4627          # Because the timestamp negotiation requires up-front knowledge of
4628          # which rows will be read, it can only be used with single-use
4629          # read-only transactions.
4630          #
4631          # See TransactionOptions.ReadOnly.max_staleness and
4632          # TransactionOptions.ReadOnly.min_read_timestamp.
4633          #
4634          # ### Old Read Timestamps and Garbage Collection
4635          #
4636          # Cloud Spanner continuously garbage collects deleted and overwritten data
4637          # in the background to reclaim storage space. This process is known
4638          # as "version GC". By default, version GC reclaims versions after they
4639          # are one hour old. Because of this, Cloud Spanner cannot perform reads
4640          # at read timestamps more than one hour in the past. This
4641          # restriction also applies to in-progress reads and/or SQL queries whose
4642          # timestamps become too old while executing. Reads and SQL queries with
4643          # too-old read timestamps fail with the error `FAILED_PRECONDITION`.
4644          #
4645          # ## Partitioned DML Transactions
4646          #
4647          # Partitioned DML transactions are used to execute DML statements with a
4648          # different execution strategy that provides different, and often better,
4649          # scalability properties for large, table-wide operations than DML in a
4650          # ReadWrite transaction. Smaller scoped statements, such as an OLTP workload,
4651          # should prefer using ReadWrite transactions.
4652          #
4653          # Partitioned DML partitions the keyspace and runs the DML statement on each
4654          # partition in separate, internal transactions. These transactions commit
4655          # automatically when complete, and run independently from one another.
4656          #
4657          # To reduce lock contention, this execution strategy only acquires read locks
4658          # on rows that match the WHERE clause of the statement. Additionally, the
4659          # smaller per-partition transactions hold locks for less time.
4660          #
4661          # That said, Partitioned DML is not a drop-in replacement for standard DML used
4662          # in ReadWrite transactions.
4663          #
4664          #  - The DML statement must be fully-partitionable. Specifically, the statement
4665          #    must be expressible as the union of many statements which each access only
4666          #    a single row of the table.
4667          #
4668          #  - The statement is not applied atomically to all rows of the table. Rather,
4669          #    the statement is applied atomically to partitions of the table, in
4670          #    independent transactions. Secondary index rows are updated atomically
4671          #    with the base table rows.
4672          #
4673          #  - Partitioned DML does not guarantee exactly-once execution semantics
4674          #    against a partition. The statement will be applied at least once to each
4675          #    partition. It is strongly recommended that the DML statement should be
4676          #    idempotent to avoid unexpected results. For instance, it is potentially
4677          #    dangerous to run a statement such as
4678          #    `UPDATE table SET column = column + 1` as it could be run multiple times
4679          #    against some rows.
4680          #
4681          #  - The partitions are committed automatically - there is no support for
4682          #    Commit or Rollback. If the call returns an error, or if the client issuing
4683          #    the ExecuteSql call dies, it is possible that some rows had the statement
4684          #    executed on them successfully. It is also possible that the statement was
4685          #    never executed against other rows.
4686          #
4687          #  - Partitioned DML transactions may only contain the execution of a single
4688          #    DML statement via ExecuteSql or ExecuteStreamingSql.
4689          #
4690          #  - If any error is encountered during the execution of the partitioned DML
4691          #    operation (for instance, a UNIQUE INDEX violation, division by zero, or a
4692          #    value that cannot be stored due to schema constraints), then the
4693          #    operation is stopped at that point and an error is returned. It is
4694          #    possible that at this point, some partitions have been committed (or even
4695          #    committed multiple times), and other partitions have not been run at all.
4696          #
4697          # Given the above, Partitioned DML is a good fit for large, database-wide
4698          # operations that are idempotent, such as deleting old rows from a very large
4699          # table.
4700        "readWrite": { # Transaction may write. Message type to initiate a read-write
4701            # transaction. Currently this transaction type has no options.
4702            #
4703            # Authorization to begin a read-write transaction requires
4704            # `spanner.databases.beginOrRollbackReadWriteTransaction` permission
4705            # on the `session` resource.
4706        },
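A minimal sketch of selecting a new locking read-write transaction for this call; the empty object mirrors the note above that `readWrite` currently has no options:

```python
# Transaction selector that begins a read-write transaction alongside
# this read/SQL call; the new transaction's id is returned in
# ResultSetMetadata.transaction.
selector = {"begin": {"readWrite": {}}}
```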
4707        "readOnly": { # Message type to initiate a read-only transaction. # Transaction will not write.
4708            #
4709            # Authorization to begin a read-only transaction requires
4710            # `spanner.databases.beginReadOnlyTransaction` permission
4711            # on the `session` resource.
4712          "minReadTimestamp": "A String", # Executes all reads at a timestamp >= `min_read_timestamp`.
4713              #
4714              # This is useful for requesting fresher data than some previous
4715              # read, or data that is fresh enough to observe the effects of some
4716              # previously committed transaction whose timestamp is known.
4717              #
4718              # Note that this option can only be used in single-use transactions.
4719              #
4720              # A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
4721              # Example: `"2014-10-02T15:01:23.045123456Z"`.
4722          "returnReadTimestamp": True or False, # If true, the Cloud Spanner-selected read timestamp is included in
4723              # the Transaction message that describes the transaction.
4724          "maxStaleness": "A String", # Read data at a timestamp >= `NOW - max_staleness`
4725              # seconds. Guarantees that all writes that have committed more
4726              # than the specified number of seconds ago are visible. Because
4727              # Cloud Spanner chooses the exact timestamp, this mode works even if
4728              # the client's local clock is substantially skewed from Cloud Spanner
4729              # commit timestamps.
4730              #
4731              # Useful for reading the freshest data available at a nearby
4732              # replica, while bounding the possible staleness if the local
4733              # replica has fallen behind.
4734              #
4735              # Note that this option can only be used in single-use
4736              # transactions.
4737          "exactStaleness": "A String", # Executes all reads at a timestamp that is `exact_staleness`
4738              # old. The timestamp is chosen soon after the read is started.
4739              #
4740              # Guarantees that all writes that have committed more than the
4741              # specified number of seconds ago are visible. Because Cloud Spanner
4742              # chooses the exact timestamp, this mode works even if the client's
4743              # local clock is substantially skewed from Cloud Spanner commit
4744              # timestamps.
4745              #
4746              # Useful for reading at nearby replicas without the distributed
4747              # timestamp negotiation overhead of `max_staleness`.
4748          "readTimestamp": "A String", # Executes all reads at the given timestamp. Unlike other modes,
4749              # reads at a specific timestamp are repeatable; the same read at
4750              # the same timestamp always returns the same data. If the
4751              # timestamp is in the future, the read will block until the
4752              # specified timestamp, modulo the read's deadline.
4753              #
4754              # Useful for large scale consistent reads such as mapreduces, or
4755              # for coordinating many reads against a consistent snapshot of the
4756              # data.
4757              #
4758              # A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
4759              # Example: `"2014-10-02T15:01:23.045123456Z"`.
4760          "strong": True or False, # Read at a timestamp where all previously committed transactions
4761              # are visible.
4762        },
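The timestamp bounds above can be sketched as alternative `readOnly` payloads (staleness and timestamp values are illustrative; a transaction sets exactly one bound):

```python
# Strong read (the default bound), also asking for the chosen timestamp.
strong_bound = {"readOnly": {"strong": True, "returnReadTimestamp": True}}

# Exact staleness: repeatable reads at one specific timestamp.
exact_bound = {"readOnly": {"readTimestamp": "2014-10-02T15:01:23.045123456Z"}}

# Bounded staleness: valid only for single-use transactions.
bounded_bound = {"readOnly": {"maxStaleness": "10s"}}
```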
4763        "partitionedDml": { # Message type to initiate a Partitioned DML transaction. # Partitioned DML transaction.
4764            #
4765            # Authorization to begin a Partitioned DML transaction requires
4766            # `spanner.databases.beginPartitionedDmlTransaction` permission
4767            # on the `session` resource.
4768        },
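Because Partitioned DML only guarantees at-least-once execution per partition, the statement should be idempotent. A sketch with hypothetical table and column names:

```python
# Safe under at-least-once semantics: re-running this statement on a
# partition leaves the data unchanged after the first application.
idempotent_sql = (
    "UPDATE Albums SET MarketingBudget = 0 WHERE MarketingBudget IS NULL"
)
# Risky: a retried partition could increment some rows more than once.
non_idempotent_sql = "UPDATE Albums SET MarketingBudget = MarketingBudget + 1"

# Selector beginning a Partitioned DML transaction for a single
# ExecuteSql / ExecuteStreamingSql call.
pdml_selector = {"begin": {"partitionedDml": {}}}
```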
4769      },
4770      "singleUse": { # Execute the read or SQL query in a temporary transaction.
4771          # This is the most efficient way to execute a transaction that
4772          # consists of a single SQL query.
4773          #
4774          # # Transactions
4775          # Each session can have at most one active transaction at a time. After the
4776          # active transaction is completed, the session can immediately be
4777          # re-used for the next transaction. It is not necessary to create a
4778          # new session for each transaction.
4779          #
4780          # # Transaction Modes
4781          #
4782          # Cloud Spanner supports three transaction modes:
4783          #
4784          #   1. Locking read-write. This type of transaction is the only way
4785          #      to write data into Cloud Spanner. These transactions rely on
4786          #      pessimistic locking and, if necessary, two-phase commit.
4787          #      Locking read-write transactions may abort, requiring the
4788          #      application to retry.
4789          #
4790          #   2. Snapshot read-only. This transaction type provides guaranteed
4791          #      consistency across several reads, but does not allow
4792          #      writes. Snapshot read-only transactions can be configured to
4793          #      read at timestamps in the past. Snapshot read-only
4794          #      transactions do not need to be committed.
4795          #
4796          #   3. Partitioned DML. This type of transaction is used to execute
4797          #      a single Partitioned DML statement. Partitioned DML partitions
4798          #      the key space and runs the DML statement over each partition
4799          #      in parallel using separate, internal transactions that commit
4800          #      independently. Partitioned DML transactions do not need to be
4801          #      committed.
4802          #
4803          # For transactions that only read, snapshot read-only transactions
4804          # provide simpler semantics and are almost always faster. In
4805          # particular, read-only transactions do not take locks, so they do
4806          # not conflict with read-write transactions. As a consequence of not
4807          # taking locks, they also do not abort, so retry loops are not needed.
4808          #
4809          # Transactions may only read/write data in a single database. They
4810          # may, however, read/write data in different tables within that
4811          # database.
4812          #
4813          # ## Locking Read-Write Transactions
4814          #
4815          # Locking transactions may be used to atomically read-modify-write
4816          # data anywhere in a database. This type of transaction is externally
4817          # consistent.
4818          #
4819          # Clients should attempt to minimize the amount of time a transaction
4820          # is active. Faster transactions commit with higher probability
4821          # and cause less contention. Cloud Spanner attempts to keep read locks
4822          # active as long as the transaction continues to do reads, and the
4823          # transaction has not been terminated by
4824          # Commit or
4825          # Rollback.  Long periods of
4826          # inactivity at the client may cause Cloud Spanner to release a
4827          # transaction's locks and abort it.
4828          #
4829          # Conceptually, a read-write transaction consists of zero or more
4830          # reads or SQL statements followed by
4831          # Commit. At any time before
4832          # Commit, the client can send a
4833          # Rollback request to abort the
4834          # transaction.
4835          #
4836          # ### Semantics
4837          #
4838          # Cloud Spanner can commit the transaction if all read locks it acquired
4839          # are still valid at commit time, and it is able to acquire write
4840          # locks for all writes. Cloud Spanner can abort the transaction for any
4841          # reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees
4842          # that the transaction has not modified any user data in Cloud Spanner.
4843          #
4844          # Unless the transaction commits, Cloud Spanner makes no guarantees about
4845          # how long the transaction's locks were held for. It is an error to
4846          # use Cloud Spanner locks for any sort of mutual exclusion other than
4847          # between Cloud Spanner transactions themselves.
4848          #
4849          # ### Retrying Aborted Transactions
4850          #
4851          # When a transaction aborts, the application can choose to retry the
4852          # whole transaction again. To maximize the chances of successfully
4853          # committing the retry, the client should execute the retry in the
4854          # same session as the original attempt. The original session's lock
4855          # priority increases with each consecutive abort, meaning that each
4856          # attempt has a slightly better chance of success than the previous.
4857          #
4858          # Under some circumstances (e.g., many transactions attempting to
4859          # modify the same row(s)), a transaction can abort many times in a
4860          # short period before successfully committing. Thus, it is not a good
4861          # idea to cap the number of retries a transaction can attempt;
4862          # instead, it is better to limit the total amount of wall time spent
4863          # retrying.
4864          #
4865          # ### Idle Transactions
4866          #
4867          # A transaction is considered idle if it has no outstanding reads or
4868          # SQL queries and has not started a read or SQL query within the last 10
4869          # seconds. Idle transactions can be aborted by Cloud Spanner so that they
4870          # don't hold on to locks indefinitely. In that case, the commit will
4871          # fail with error `ABORTED`.
4872          #
4873          # If this behavior is undesirable, periodically executing a simple
4874          # SQL query in the transaction (e.g., `SELECT 1`) prevents the
4875          # transaction from becoming idle.
4876          #
4877          # ## Snapshot Read-Only Transactions
4878          #
4879          # Snapshot read-only transactions provide a simpler method than
4880          # locking read-write transactions for doing several consistent
4881          # reads. However, this type of transaction does not support writes.
4882          #
4883          # Snapshot transactions do not take locks. Instead, they work by
4884          # choosing a Cloud Spanner timestamp, then executing all reads at that
4885          # timestamp. Since they do not acquire locks, they do not block
4886          # concurrent read-write transactions.
4887          #
4888          # Unlike locking read-write transactions, snapshot read-only
4889          # transactions never abort. They can fail if the chosen read
4890          # timestamp is garbage collected; however, the default garbage
4891          # collection policy is generous enough that most applications do not
4892          # need to worry about this in practice.
4893          #
4894          # Snapshot read-only transactions do not need to call
4895          # Commit or
4896          # Rollback (and in fact are not
4897          # permitted to do so).
4898          #
4899          # To execute a snapshot transaction, the client specifies a timestamp
4900          # bound, which tells Cloud Spanner how to choose a read timestamp.
4901          #
4902          # The types of timestamp bound are:
4903          #
4904          #   - Strong (the default).
4905          #   - Bounded staleness.
4906          #   - Exact staleness.
4907          #
4908          # If the Cloud Spanner database to be read is geographically distributed,
4909          # stale read-only transactions can execute more quickly than strong
4910          # or read-write transactions, because they are able to execute far
4911          # from the leader replica.
4912          #
4913          # Each type of timestamp bound is discussed in detail below.
4914          #
4915          # ### Strong
4916          #
4917          # Strong reads are guaranteed to see the effects of all transactions
4918          # that have committed before the start of the read. Furthermore, all
4919          # rows yielded by a single read are consistent with each other -- if
4920          # any part of the read observes a transaction, all parts of the read
4921          # see the transaction.
4922          #
4923          # Strong reads are not repeatable: two consecutive strong read-only
4924          # transactions might return inconsistent results if there are
4925          # concurrent writes. If consistency across reads is required, the
4926          # reads should be executed within a transaction or at an exact read
4927          # timestamp.
4928          #
4929          # See TransactionOptions.ReadOnly.strong.
4930          #
4931          # ### Exact Staleness
4932          #
4933          # These timestamp bounds execute reads at a user-specified
4934          # timestamp. Reads at a timestamp are guaranteed to see a consistent
4935          # prefix of the global transaction history: they observe
4936          # modifications done by all transactions with a commit timestamp <=
4937          # the read timestamp, and observe none of the modifications done by
4938          # transactions with a larger commit timestamp. They will block until
4939          # all conflicting transactions that may be assigned commit timestamps
4940          # <= the read timestamp have finished.
4941          #
4942          # The timestamp can either be expressed as an absolute Cloud Spanner commit
4943          # timestamp or a staleness relative to the current time.
4944          #
4945          # These modes do not require a "negotiation phase" to pick a
4946          # timestamp. As a result, they execute slightly faster than the
4947          # equivalent boundedly stale concurrency modes. On the other hand,
4948          # boundedly stale reads usually return fresher results.
4949          #
4950          # See TransactionOptions.ReadOnly.read_timestamp and
4951          # TransactionOptions.ReadOnly.exact_staleness.
4952          #
4953          # ### Bounded Staleness
4954          #
4955          # Bounded staleness modes allow Cloud Spanner to pick the read timestamp,
4956          # subject to a user-provided staleness bound. Cloud Spanner chooses the
4957          # newest timestamp within the staleness bound that allows execution
4958          # of the reads at the closest available replica without blocking.
4959          #
4960          # All rows yielded are consistent with each other -- if any part of
4961          # the read observes a transaction, all parts of the read see the
4962          # transaction. Boundedly stale reads are not repeatable: two stale
4963          # reads, even if they use the same staleness bound, can execute at
4964          # different timestamps and thus return inconsistent results.
4965          #
4966          # Boundedly stale reads execute in two phases: the first phase
4967          # negotiates a timestamp among all replicas needed to serve the
4968          # read. In the second phase, reads are executed at the negotiated
4969          # timestamp.
4970          #
4971          # As a result of the two-phase execution, bounded staleness reads are
4972          # usually a little slower than comparable exact staleness
4973          # reads. However, they are typically able to return fresher
4974          # results, and are more likely to execute at the closest replica.
4975          #
4976          # Because the timestamp negotiation requires up-front knowledge of
4977          # which rows will be read, it can only be used with single-use
4978          # read-only transactions.
4979          #
4980          # See TransactionOptions.ReadOnly.max_staleness and
4981          # TransactionOptions.ReadOnly.min_read_timestamp.
4982          #
4983          # ### Old Read Timestamps and Garbage Collection
4984          #
4985          # Cloud Spanner continuously garbage collects deleted and overwritten data
4986          # in the background to reclaim storage space. This process is known
4987          # as "version GC". By default, version GC reclaims versions after they
4988          # are one hour old. Because of this, Cloud Spanner cannot perform reads
4989          # at read timestamps more than one hour in the past. This
4990          # restriction also applies to in-progress reads and/or SQL queries whose
4991          # timestamps become too old while executing. Reads and SQL queries with
4992          # too-old read timestamps fail with the error `FAILED_PRECONDITION`.
4993          #
4994          # ## Partitioned DML Transactions
4995          #
4996          # Partitioned DML transactions are used to execute DML statements with a
4997          # different execution strategy that provides different, and often better,
4998          # scalability properties for large, table-wide operations than DML in a
4999          # ReadWrite transaction. Smaller scoped statements, such as an OLTP workload,
5000          # should use ReadWrite transactions.
5001          #
5002          # Partitioned DML partitions the keyspace and runs the DML statement on each
5003          # partition in separate, internal transactions. These transactions commit
5004          # automatically when complete, and run independently from one another.
5005          #
5006          # To reduce lock contention, this execution strategy only acquires read locks
5007          # on rows that match the WHERE clause of the statement. Additionally, the
5008          # smaller per-partition transactions hold locks for less time.
5009          #
5010          # That said, Partitioned DML is not a drop-in replacement for standard DML used
5011          # in ReadWrite transactions.
5012          #
5013          #  - The DML statement must be fully-partitionable. Specifically, the statement
5014          #    must be expressible as the union of many statements which each access only
5015          #    a single row of the table.
5016          #
5017          #  - The statement is not applied atomically to all rows of the table. Rather,
5018          #    the statement is applied atomically to partitions of the table, in
5019          #    independent transactions. Secondary index rows are updated atomically
5020          #    with the base table rows.
5021          #
5022          #  - Partitioned DML does not guarantee exactly-once execution semantics
5023          #    against a partition. The statement will be applied at least once to each
5024          #    partition. It is strongly recommended that the DML statement be
5025          #    idempotent to avoid unexpected results. For instance, it is potentially
5026          #    dangerous to run a statement such as
5027          #    `UPDATE table SET column = column + 1` as it could be run multiple times
5028          #    against some rows.
5029          #
5030          #  - The partitions are committed automatically - there is no support for
5031          #    Commit or Rollback. If the call returns an error, or if the client issuing
5032          #    the ExecuteSql call dies, it is possible that some rows had the statement
5033          #    executed on them successfully. It is also possible that the statement was
5034          #    never executed against other rows.
5035          #
5036          #  - Partitioned DML transactions may only contain the execution of a single
5037          #    DML statement via ExecuteSql or ExecuteStreamingSql.
5038          #
5039          #  - If any error is encountered during the execution of the partitioned DML
5040          #    operation (for instance, a UNIQUE INDEX violation, division by zero, or a
5041          #    value that cannot be stored due to schema constraints), then the
5042          #    operation is stopped at that point and an error is returned. It is
5043          #    possible that at this point, some partitions have been committed (or even
5044          #    committed multiple times), and other partitions have not been run at all.
5045          #
5046          # Given the above, Partitioned DML is a good fit for large, database-wide
5047          # operations that are idempotent, such as deleting old rows from a very large
5048          # table.
5049        "readWrite": { # Message type to initiate a read-write transaction. Currently this # Transaction may write.
5050            # transaction type has no options.
5051            #
5052            # Authorization to begin a read-write transaction requires
5053            # `spanner.databases.beginOrRollbackReadWriteTransaction` permission
5054            # on the `session` resource.
5055        },
5056        "readOnly": { # Message type to initiate a read-only transaction. # Transaction will not write.
5057            #
5058            # Authorization to begin a read-only transaction requires
5059            # `spanner.databases.beginReadOnlyTransaction` permission
5060            # on the `session` resource.
5061          "minReadTimestamp": "A String", # Executes all reads at a timestamp >= `min_read_timestamp`.
5062              #
5063              # This is useful for requesting fresher data than some previous
5064              # read, or data that is fresh enough to observe the effects of some
5065              # previously committed transaction whose timestamp is known.
5066              #
5067              # Note that this option can only be used in single-use transactions.
5068              #
5069              # A timestamp in RFC3339 UTC "Zulu" format, accurate to nanoseconds.
5070              # Example: `"2014-10-02T15:01:23.045123456Z"`.
5071          "returnReadTimestamp": True or False, # If true, the Cloud Spanner-selected read timestamp is included in
5072              # the Transaction message that describes the transaction.
5073          "maxStaleness": "A String", # Read data at a timestamp >= `NOW - max_staleness`
5074              # seconds. Guarantees that all writes that have committed more
5075              # than the specified number of seconds ago are visible. Because
5076              # Cloud Spanner chooses the exact timestamp, this mode works even if
5077              # the client's local clock is substantially skewed from Cloud Spanner
5078              # commit timestamps.
5079              #
5080              # Useful for reading the freshest data available at a nearby
5081              # replica, while bounding the possible staleness if the local
5082              # replica has fallen behind.
5083              #
5084              # Note that this option can only be used in single-use
5085              # transactions.
5086          "exactStaleness": "A String", # Executes all reads at a timestamp that is `exact_staleness`
5087              # old. The timestamp is chosen soon after the read is started.
5088              #
5089              # Guarantees that all writes that have committed more than the
5090              # specified number of seconds ago are visible. Because Cloud Spanner
5091              # chooses the exact timestamp, this mode works even if the client's
5092              # local clock is substantially skewed from Cloud Spanner commit
5093              # timestamps.
5094              #
5095              # Useful for reading at nearby replicas without the distributed
5096              # timestamp negotiation overhead of `max_staleness`.
5097          "readTimestamp": "A String", # Executes all reads at the given timestamp. Unlike other modes,
5098              # reads at a specific timestamp are repeatable; the same read at
5099              # the same timestamp always returns the same data. If the
5100              # timestamp is in the future, the read will block until the
5101              # specified timestamp, modulo the read's deadline.
5102              #
5103              # Useful for large scale consistent reads such as mapreduces, or
5104              # for coordinating many reads against a consistent snapshot of the
5105              # data.
5106              #
5107              # A timestamp in RFC3339 UTC "Zulu" format, accurate to nanoseconds.
5108              # Example: `"2014-10-02T15:01:23.045123456Z"`.
5109          "strong": True or False, # Read at a timestamp where all previously committed transactions
5110              # are visible.
5111        },
5112        "partitionedDml": { # Message type to initiate a Partitioned DML transaction. # Partitioned DML transaction.
5113            #
5114            # Authorization to begin a Partitioned DML transaction requires
5115            # `spanner.databases.beginPartitionedDmlTransaction` permission
5116            # on the `session` resource.
5117        },
5118      },
5119      "id": "A String", # Execute the read or SQL query in a previously-started transaction.
5120    },
5121    "params": { # The SQL query string can contain parameter placeholders. A parameter
5122        # placeholder consists of `'@'` followed by the parameter
5123        # name. Parameter names consist of any combination of letters,
5124        # numbers, and underscores.
5125        #
5126        # Parameters can appear anywhere that a literal value is expected.  The same
5127        # parameter name can be used more than once, for example:
5128        #   `"WHERE id > @msg_id AND id < @msg_id + 100"`
5129        #
5130        # It is an error to execute an SQL query with unbound parameters.
5131        #
5132        # Parameter values are specified using `params`, which is a JSON
5133        # object whose keys are parameter names, and whose values are the
5134        # corresponding parameter values.
5135      "a_key": "", # Properties of the object.
5136    },
5137    "sql": "A String", # The query request to generate partitions for. The request will fail if
5138        # the query is not root partitionable. The query plan of a root
5139        # partitionable query has a single distributed union operator. A distributed
5140        # union operator conceptually divides one or more tables into multiple
5141        # splits, remotely evaluates a subquery independently on each split, and
5142        # then unions all results.
5143        #
5144        # This must not contain DML commands, such as INSERT, UPDATE, or
5145        # DELETE. Use ExecuteStreamingSql with a
5146        # PartitionedDml transaction for large, partition-friendly DML operations.
5147  }
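As a sketch of the parameter rules above, the request body below binds `@msg_id` in a query and begins a strong read-only snapshot transaction. The `Messages` table, the parameter value, and the helper name are illustrative assumptions, not part of this reference; INT64 parameter values are passed as JSON strings here.

```python
# Illustrative PartitionQuery request body; msg_id and the Messages table
# are placeholder assumptions, not part of this reference.
def build_body(msg_id):
    return {
        "sql": "SELECT * FROM Messages "
               "WHERE id > @msg_id AND id < @msg_id + 100",
        # `params` keys are the parameter names without the leading '@'.
        # INT64 values are serialized as strings in the JSON encoding.
        "params": {"msg_id": str(msg_id)},
        # Partition tokens require a read-only snapshot transaction.
        "transaction": {"begin": {"readOnly": {"strong": True}}},
    }
```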
5148
5149  x__xgafv: string, V1 error format.
5150    Allowed values
5151      1 - v1 error format
5152      2 - v2 error format
5153
5154Returns:
5155  An object of the form:
5156
5157    { # The response for PartitionQuery
5158      # or PartitionRead
5159    "transaction": { # A transaction. # Transaction created by this request.
5160      "readTimestamp": "A String", # For snapshot read-only transactions, the read timestamp chosen
5161          # for the transaction. Not returned by default: see
5162          # TransactionOptions.ReadOnly.return_read_timestamp.
5163          #
5164          # A timestamp in RFC3339 UTC "Zulu" format, accurate to nanoseconds.
5165          # Example: `"2014-10-02T15:01:23.045123456Z"`.
5166      "id": "A String", # `id` may be used to identify the transaction in subsequent
5167          # Read,
5168          # ExecuteSql,
5169          # Commit, or
5170          # Rollback calls.
5171          #
5172          # Single-use read-only transactions do not have IDs, because
5173          # single-use transactions do not support multiple requests.
5174    },
5175    "partitions": [ # Partitions created by this request.
5176      { # Information returned for each partition returned in a
5177          # PartitionResponse.
5178        "partitionToken": "A String", # This token can be passed to Read, StreamingRead, ExecuteSql, or
5179            # ExecuteStreamingSql requests to restrict the results to those identified by
5180            # this partition token.
5181      },
5182    ],
5183  }</pre>
5184</div>
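The partition workflow above (create tokens with partitionQuery, then issue one streaming call per token) can be sketched with request-body helpers. The field names (`sql`, `transaction`, `partitionToken`) follow this reference; the helper names and values are illustrative assumptions.

```python
# Sketch of the PartitionQuery fan-out flow. Field names follow this
# reference; everything else (names, SQL text) is illustrative.

def partition_query_body(sql):
    # Partition tokens must be created in a read-only snapshot transaction;
    # beginning a strong one here makes the response carry its transaction id.
    return {
        "sql": sql,
        "transaction": {"begin": {"readOnly": {"strong": True}}},
    }

def per_partition_body(sql, txn_id, token):
    # Each ExecuteStreamingSql call reuses the same transaction and is
    # restricted to one partition via its token.
    return {
        "sql": sql,
        "transaction": {"id": txn_id},
        "partitionToken": token,
    }

def fan_out(partition_response, sql):
    # Build one per-partition request body for each returned token.
    txn_id = partition_response["transaction"]["id"]
    return [
        per_partition_body(sql, txn_id, p["partitionToken"])
        for p in partition_response.get("partitions", [])
    ]
```

With a discovery-based `sessions` resource (an assumption about client setup), the flow would be roughly `resp = sessions.partitionQuery(session=name, body=partition_query_body(sql)).execute()` followed by one `executeStreamingSql` call per body from `fan_out(resp, sql)`, possibly spread across parallel workers sharing the same session and transaction.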
5185
5186<div class="method">
5187    <code class="details" id="partitionRead">partitionRead(session, body, x__xgafv=None)</code>
5188  <pre>Creates a set of partition tokens that can be used to execute a read
5189operation in parallel.  Each of the returned partition tokens can be used
5190by StreamingRead to specify a subset of the read
5191result to read.  The same session and read-only transaction must be used by
5192the PartitionReadRequest used to create the partition tokens and the
5193ReadRequests that use the partition tokens.  There are no ordering
5194guarantees on rows returned among the returned partition tokens, or even
5195within each individual StreamingRead call issued with a partition_token.
5196
5197Partition tokens become invalid when the session used to create them
5198is deleted, is idle for too long, begins a new transaction, or becomes too
5199old.  When any of these happen, it is not possible to resume the read, and
5200the whole operation must be restarted from the beginning.
5201
5202Args:
5203  session: string, Required. The session used to create the partitions. (required)
5204  body: object, The request body. (required)
5205    The object takes the form of:
5206
5207{ # The request for PartitionRead
5208    "index": "A String", # If non-empty, the name of an index on table. This index is
5209        # used instead of the table primary key when interpreting key_set
5210        # and sorting result rows. See key_set for further information.
5211    "transaction": { # Read only snapshot transactions are supported, read/write and single use
5212        # transactions are not.
5213        #
5214        # This message is used to select the transaction in which a
5215        # Read or ExecuteSql call runs.
5216        # See TransactionOptions for more information about transactions.
5217      "begin": { # Begin a new transaction and execute this read or SQL query in
5218          # it. The transaction ID of the new transaction is returned in
5219          # ResultSetMetadata.transaction, which is a Transaction.
5220          #
5221          # # Transactions
5222          # Each session can have at most one active transaction at a time. After the
5223          # active transaction is completed, the session can immediately be
5224          # re-used for the next transaction. It is not necessary to create a
5225          # new session for each transaction.
5226          #
5227          # # Transaction Modes
5228          #
5229          # Cloud Spanner supports three transaction modes:
5230          #
5231          #   1. Locking read-write. This type of transaction is the only way
5232          #      to write data into Cloud Spanner. These transactions rely on
5233          #      pessimistic locking and, if necessary, two-phase commit.
5234          #      Locking read-write transactions may abort, requiring the
5235          #      application to retry.
5236          #
5237          #   2. Snapshot read-only. This transaction type provides guaranteed
5238          #      consistency across several reads, but does not allow
5239          #      writes. Snapshot read-only transactions can be configured to
5240          #      read at timestamps in the past. Snapshot read-only
5241          #      transactions do not need to be committed.
5242          #
5243          #   3. Partitioned DML. This type of transaction is used to execute
5244          #      a single Partitioned DML statement. Partitioned DML partitions
5245          #      the key space and runs the DML statement over each partition
5246          #      in parallel using separate, internal transactions that commit
5247          #      independently. Partitioned DML transactions do not need to be
5248          #      committed.
5249          #
5250          # For transactions that only read, snapshot read-only transactions
5251          # provide simpler semantics and are almost always faster. In
5252          # particular, read-only transactions do not take locks, so they do
5253          # not conflict with read-write transactions. As a consequence of not
5254          # taking locks, they also do not abort, so retry loops are not needed.
5255          #
5256          # Transactions may only read/write data in a single database. They
5257          # may, however, read/write data in different tables within that
5258          # database.
5259          #
5260          # ## Locking Read-Write Transactions
5261          #
5262          # Locking transactions may be used to atomically read-modify-write
5263          # data anywhere in a database. This type of transaction is externally
5264          # consistent.
5265          #
5266          # Clients should attempt to minimize the amount of time a transaction
5267          # is active. Faster transactions commit with higher probability
5268          # and cause less contention. Cloud Spanner attempts to keep read locks
5269          # active as long as the transaction continues to do reads, and the
5270          # transaction has not been terminated by
5271          # Commit or
5272          # Rollback.  Long periods of
5273          # inactivity at the client may cause Cloud Spanner to release a
5274          # transaction's locks and abort it.
5275          #
5276          # Conceptually, a read-write transaction consists of zero or more
5277          # reads or SQL statements followed by
5278          # Commit. At any time before
5279          # Commit, the client can send a
5280          # Rollback request to abort the
5281          # transaction.
5282          #
5283          # ### Semantics
5284          #
5285          # Cloud Spanner can commit the transaction if all read locks it acquired
5286          # are still valid at commit time, and it is able to acquire write
5287          # locks for all writes. Cloud Spanner can abort the transaction for any
5288          # reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees
5289          # that the transaction has not modified any user data in Cloud Spanner.
5290          #
5291          # Unless the transaction commits, Cloud Spanner makes no guarantees about
5292          # how long the transaction's locks were held for. It is an error to
5293          # use Cloud Spanner locks for any sort of mutual exclusion other than
5294          # between Cloud Spanner transactions themselves.
5295          #
5296          # ### Retrying Aborted Transactions
5297          #
5298          # When a transaction aborts, the application can choose to retry the
5299          # whole transaction again. To maximize the chances of successfully
5300          # committing the retry, the client should execute the retry in the
5301          # same session as the original attempt. The original session's lock
5302          # priority increases with each consecutive abort, meaning that each
5303          # attempt has a slightly better chance of success than the previous.
5304          #
5305          # Under some circumstances (e.g., many transactions attempting to
5306          # modify the same row(s)), a transaction can abort many times in a
5307          # short period before successfully committing. Thus, it is not a good
5308          # idea to cap the number of retries a transaction can attempt;
5309          # instead, it is better to limit the total amount of wall time spent
5310          # retrying.
5311          #
5312          # ### Idle Transactions
5313          #
5314          # A transaction is considered idle if it has no outstanding reads or
5315          # SQL queries and has not started a read or SQL query within the last 10
5316          # seconds. Idle transactions can be aborted by Cloud Spanner so that they
5317          # don't hold on to locks indefinitely. In that case, the commit will
5318          # fail with error `ABORTED`.
5319          #
5320          # If this behavior is undesirable, periodically executing a simple
5321          # SQL query in the transaction (e.g., `SELECT 1`) prevents the
5322          # transaction from becoming idle.
5323          #
5324          # ## Snapshot Read-Only Transactions
5325          #
5326          # Snapshot read-only transactions provide a simpler method than
5327          # locking read-write transactions for doing several consistent
5328          # reads. However, this type of transaction does not support writes.
5329          #
5330          # Snapshot transactions do not take locks. Instead, they work by
5331          # choosing a Cloud Spanner timestamp, then executing all reads at that
5332          # timestamp. Since they do not acquire locks, they do not block
5333          # concurrent read-write transactions.
5334          #
5335          # Unlike locking read-write transactions, snapshot read-only
5336          # transactions never abort. They can fail if the chosen read
5337          # timestamp is garbage collected; however, the default garbage
5338          # collection policy is generous enough that most applications do not
5339          # need to worry about this in practice.
5340          #
5341          # Snapshot read-only transactions do not need to call
5342          # Commit or
5343          # Rollback (and in fact are not
5344          # permitted to do so).
5345          #
5346          # To execute a snapshot transaction, the client specifies a timestamp
5347          # bound, which tells Cloud Spanner how to choose a read timestamp.
5348          #
5349          # The types of timestamp bound are:
5350          #
5351          #   - Strong (the default).
5352          #   - Bounded staleness.
5353          #   - Exact staleness.
5354          #
5355          # If the Cloud Spanner database to be read is geographically distributed,
5356          # stale read-only transactions can execute more quickly than strong
5357          # or read-write transactions, because they are able to execute far
5358          # from the leader replica.
5359          #
5360          # Each type of timestamp bound is discussed in detail below.
5361          #
5362          # ### Strong
5363          #
5364          # Strong reads are guaranteed to see the effects of all transactions
5365          # that have committed before the start of the read. Furthermore, all
5366          # rows yielded by a single read are consistent with each other -- if
5367          # any part of the read observes a transaction, all parts of the read
5368          # see the transaction.
5369          #
5370          # Strong reads are not repeatable: two consecutive strong read-only
5371          # transactions might return inconsistent results if there are
5372          # concurrent writes. If consistency across reads is required, the
5373          # reads should be executed within a transaction or at an exact read
5374          # timestamp.
5375          #
5376          # See TransactionOptions.ReadOnly.strong.
5377          #
5378          # ### Exact Staleness
5379          #
5380          # These timestamp bounds execute reads at a user-specified
5381          # timestamp. Reads at a timestamp are guaranteed to see a consistent
5382          # prefix of the global transaction history: they observe
5383          # modifications done by all transactions with a commit timestamp <=
5384          # the read timestamp, and observe none of the modifications done by
5385          # transactions with a larger commit timestamp. They will block until
5386          # all conflicting transactions that may be assigned commit timestamps
5387          # <= the read timestamp have finished.
5388          #
5389          # The timestamp can either be expressed as an absolute Cloud Spanner commit
5390          # timestamp or a staleness relative to the current time.
5391          #
5392          # These modes do not require a "negotiation phase" to pick a
5393          # timestamp. As a result, they execute slightly faster than the
5394          # equivalent boundedly stale concurrency modes. On the other hand,
5395          # boundedly stale reads usually return fresher results.
5396          #
5397          # See TransactionOptions.ReadOnly.read_timestamp and
5398          # TransactionOptions.ReadOnly.exact_staleness.
5399          #
5400          # ### Bounded Staleness
5401          #
5402          # Bounded staleness modes allow Cloud Spanner to pick the read timestamp,
5403          # subject to a user-provided staleness bound. Cloud Spanner chooses the
5404          # newest timestamp within the staleness bound that allows execution
5405          # of the reads at the closest available replica without blocking.
5406          #
5407          # All rows yielded are consistent with each other -- if any part of
5408          # the read observes a transaction, all parts of the read see the
5409          # transaction. Boundedly stale reads are not repeatable: two stale
5410          # reads, even if they use the same staleness bound, can execute at
5411          # different timestamps and thus return inconsistent results.
5412          #
5413          # Boundedly stale reads execute in two phases: the first phase
5414          # negotiates a timestamp among all replicas needed to serve the
5415          # read. In the second phase, reads are executed at the negotiated
5416          # timestamp.
5417          #
5418          # As a result of the two-phase execution, bounded staleness reads are
5419          # usually a little slower than comparable exact staleness
5420          # reads. However, they are typically able to return fresher
5421          # results, and are more likely to execute at the closest replica.
5422          #
5423          # Because the timestamp negotiation requires up-front knowledge of
5424          # which rows will be read, it can only be used with single-use
5425          # read-only transactions.
5426          #
5427          # See TransactionOptions.ReadOnly.max_staleness and
5428          # TransactionOptions.ReadOnly.min_read_timestamp.
5429          #
5430          # ### Old Read Timestamps and Garbage Collection
5431          #
5432          # Cloud Spanner continuously garbage collects deleted and overwritten data
5433          # in the background to reclaim storage space. This process is known
5434          # as "version GC". By default, version GC reclaims versions after they
5435          # are one hour old. Because of this, Cloud Spanner cannot perform reads
5436          # at read timestamps more than one hour in the past. This
5437          # restriction also applies to in-progress reads and/or SQL queries whose
5438          # timestamps become too old while executing. Reads and SQL queries with
5439          # too-old read timestamps fail with the error `FAILED_PRECONDITION`.
5440          #
5441          # ## Partitioned DML Transactions
5442          #
5443          # Partitioned DML transactions are used to execute DML statements with a
5444          # different execution strategy that provides different, and often better,
5445          # scalability properties for large, table-wide operations than DML in a
5446          # ReadWrite transaction. Smaller scoped statements, such as an OLTP workload,
5447          # should use ReadWrite transactions.
5448          #
5449          # Partitioned DML partitions the keyspace and runs the DML statement on each
5450          # partition in separate, internal transactions. These transactions commit
5451          # automatically when complete, and run independently from one another.
5452          #
5453          # To reduce lock contention, this execution strategy only acquires read locks
5454          # on rows that match the WHERE clause of the statement. Additionally, the
5455          # smaller per-partition transactions hold locks for less time.
5456          #
5457          # That said, Partitioned DML is not a drop-in replacement for standard DML used
5458          # in ReadWrite transactions.
5459          #
5460          #  - The DML statement must be fully-partitionable. Specifically, the statement
5461          #    must be expressible as the union of many statements which each access only
5462          #    a single row of the table.
5463          #
5464          #  - The statement is not applied atomically to all rows of the table. Rather,
5465          #    the statement is applied atomically to partitions of the table, in
5466          #    independent transactions. Secondary index rows are updated atomically
5467          #    with the base table rows.
5468          #
5469          #  - Partitioned DML does not guarantee exactly-once execution semantics
5470          #    against a partition. The statement will be applied at least once to each
5471          #    partition. It is strongly recommended that the DML statement be
5472          #    idempotent to avoid unexpected results. For instance, it is potentially
5473          #    dangerous to run a statement such as
5474          #    `UPDATE table SET column = column + 1` as it could be run multiple times
5475          #    against some rows.
5476          #
5477          #  - The partitions are committed automatically - there is no support for
5478          #    Commit or Rollback. If the call returns an error, or if the client issuing
5479          #    the ExecuteSql call dies, it is possible that some rows had the statement
5480          #    executed on them successfully. It is also possible that the statement was
5481          #    never executed against other rows.
5482          #
5483          #  - Partitioned DML transactions may only contain the execution of a single
5484          #    DML statement via ExecuteSql or ExecuteStreamingSql.
5485          #
5486          #  - If any error is encountered during the execution of the partitioned DML
5487          #    operation (for instance, a UNIQUE INDEX violation, division by zero, or a
5488          #    value that cannot be stored due to schema constraints), then the
5489          #    operation is stopped at that point and an error is returned. It is
5490          #    possible that at this point, some partitions have been committed (or even
5491          #    committed multiple times), and other partitions have not been run at all.
5492          #
5493          # Given the above, Partitioned DML is good fit for large, database-wide,
5494          # operations that are idempotent, such as deleting old rows from a very large
5495          # table.
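          #
          # For example (table and column names here are illustrative only), a
          # statement such as
          #
          #     DELETE FROM Events WHERE EventDate < '2010-01-01'
          #
          # is idempotent: applying it more than once to the same partition
          # deletes the same rows, so at-least-once execution is safe.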
        "readWrite": { # Message type to initiate a read-write transaction. Currently this # Transaction may write.
            # transaction type has no options.
            #
            # Authorization to begin a read-write transaction requires
            # `spanner.databases.beginOrRollbackReadWriteTransaction` permission
            # on the `session` resource.
        },
        "readOnly": { # Message type to initiate a read-only transaction. # Transaction will not write.
            #
            # Authorization to begin a read-only transaction requires
            # `spanner.databases.beginReadOnlyTransaction` permission
            # on the `session` resource.
          "minReadTimestamp": "A String", # Executes all reads at a timestamp >= `min_read_timestamp`.
              #
              # This is useful for requesting fresher data than some previous
              # read, or data that is fresh enough to observe the effects of some
              # previously committed transaction whose timestamp is known.
              #
              # Note that this option can only be used in single-use transactions.
              #
              # A timestamp in RFC3339 UTC "Zulu" format, accurate to nanoseconds.
              # Example: `"2014-10-02T15:01:23.045123456Z"`.
          "returnReadTimestamp": True or False, # If true, the Cloud Spanner-selected read timestamp is included in
              # the Transaction message that describes the transaction.
          "maxStaleness": "A String", # Read data at a timestamp >= `NOW - max_staleness`
              # seconds. Guarantees that all writes that have committed more
              # than the specified number of seconds ago are visible. Because
              # Cloud Spanner chooses the exact timestamp, this mode works even if
              # the client's local clock is substantially skewed from Cloud Spanner
              # commit timestamps.
              #
              # Useful for reading the freshest data available at a nearby
              # replica, while bounding the possible staleness if the local
              # replica has fallen behind.
              #
              # Note that this option can only be used in single-use
              # transactions.
          "exactStaleness": "A String", # Executes all reads at a timestamp that is `exact_staleness`
              # old. The timestamp is chosen soon after the read is started.
              #
              # Guarantees that all writes that have committed more than the
              # specified number of seconds ago are visible. Because Cloud Spanner
              # chooses the exact timestamp, this mode works even if the client's
              # local clock is substantially skewed from Cloud Spanner commit
              # timestamps.
              #
              # Useful for reading at nearby replicas without the distributed
              # timestamp negotiation overhead of `max_staleness`.
          "readTimestamp": "A String", # Executes all reads at the given timestamp. Unlike other modes,
              # reads at a specific timestamp are repeatable; the same read at
              # the same timestamp always returns the same data. If the
              # timestamp is in the future, the read will block until the
              # specified timestamp, modulo the read's deadline.
              #
              # Useful for large scale consistent reads such as mapreduces, or
              # for coordinating many reads against a consistent snapshot of the
              # data.
              #
              # A timestamp in RFC3339 UTC "Zulu" format, accurate to nanoseconds.
              # Example: `"2014-10-02T15:01:23.045123456Z"`.
          "strong": True or False, # Read at a timestamp where all previously committed transactions
              # are visible.
        },
        "partitionedDml": { # Message type to initiate a Partitioned DML transaction. # Partitioned DML transaction.
            #
            # Authorization to begin a Partitioned DML transaction requires
            # `spanner.databases.beginPartitionedDmlTransaction` permission
            # on the `session` resource.
        },
      },
      "singleUse": { # # Transactions # Execute the read or SQL query in a temporary transaction.
          # This is the most efficient way to execute a transaction that
          # consists of a single SQL query.
          #
          # Each session can have at most one active transaction at a time. After the
          # active transaction is completed, the session can immediately be
          # re-used for the next transaction. It is not necessary to create a
          # new session for each transaction.
          #
          # # Transaction Modes
          #
          # Cloud Spanner supports three transaction modes:
          #
          #   1. Locking read-write. This type of transaction is the only way
          #      to write data into Cloud Spanner. These transactions rely on
          #      pessimistic locking and, if necessary, two-phase commit.
          #      Locking read-write transactions may abort, requiring the
          #      application to retry.
          #
          #   2. Snapshot read-only. This transaction type provides guaranteed
          #      consistency across several reads, but does not allow
          #      writes. Snapshot read-only transactions can be configured to
          #      read at timestamps in the past. Snapshot read-only
          #      transactions do not need to be committed.
          #
          #   3. Partitioned DML. This type of transaction is used to execute
          #      a single Partitioned DML statement. Partitioned DML partitions
          #      the key space and runs the DML statement over each partition
          #      in parallel using separate, internal transactions that commit
          #      independently. Partitioned DML transactions do not need to be
          #      committed.
          #
          # For transactions that only read, snapshot read-only transactions
          # provide simpler semantics and are almost always faster. In
          # particular, read-only transactions do not take locks, so they do
          # not conflict with read-write transactions. As a consequence of not
          # taking locks, they also do not abort, so retry loops are not needed.
          #
          # Transactions may only read/write data in a single database. They
          # may, however, read/write data in different tables within that
          # database.
          #
          # ## Locking Read-Write Transactions
          #
          # Locking transactions may be used to atomically read-modify-write
          # data anywhere in a database. This type of transaction is externally
          # consistent.
          #
          # Clients should attempt to minimize the amount of time a transaction
          # is active. Faster transactions commit with higher probability
          # and cause less contention. Cloud Spanner attempts to keep read locks
          # active as long as the transaction continues to do reads, and the
          # transaction has not been terminated by
          # Commit or
          # Rollback. Long periods of
          # inactivity at the client may cause Cloud Spanner to release a
          # transaction's locks and abort it.
          #
          # Conceptually, a read-write transaction consists of zero or more
          # reads or SQL statements followed by
          # Commit. At any time before
          # Commit, the client can send a
          # Rollback request to abort the
          # transaction.
          #
          # ### Semantics
          #
          # Cloud Spanner can commit the transaction if all read locks it acquired
          # are still valid at commit time, and it is able to acquire write
          # locks for all writes. Cloud Spanner can abort the transaction for any
          # reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees
          # that the transaction has not modified any user data in Cloud Spanner.
          #
          # Unless the transaction commits, Cloud Spanner makes no guarantees about
          # how long the transaction's locks were held for. It is an error to
          # use Cloud Spanner locks for any sort of mutual exclusion other than
          # between Cloud Spanner transactions themselves.
          #
          # ### Retrying Aborted Transactions
          #
          # When a transaction aborts, the application can choose to retry the
          # whole transaction again. To maximize the chances of successfully
          # committing the retry, the client should execute the retry in the
          # same session as the original attempt. The original session's lock
          # priority increases with each consecutive abort, meaning that each
          # attempt has a slightly better chance of success than the previous.
          #
          # Under some circumstances (e.g., many transactions attempting to
          # modify the same row(s)), a transaction can abort many times in a
          # short period before successfully committing. Thus, it is not a good
          # idea to cap the number of retries a transaction can attempt;
          # instead, it is better to limit the total amount of wall time spent
          # retrying.
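          #
          # For illustration only, a retry loop bounded by wall time rather
          # than by attempt count might look like this (`run_transaction` and
          # the 60-second budget are hypothetical, not part of this API):
          #
          #     deadline = time.time() + 60
          #     while True:
          #         try:
          #             run_transaction(session)   # reads/writes + Commit
          #             break
          #         except AbortedError:           # retry in the same session
          #             if time.time() >= deadline:
          #                 raise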
          #
          # ### Idle Transactions
          #
          # A transaction is considered idle if it has no outstanding reads or
          # SQL queries and has not started a read or SQL query within the last 10
          # seconds. Idle transactions can be aborted by Cloud Spanner so that they
          # don't hold on to locks indefinitely. In that case, the commit will
          # fail with error `ABORTED`.
          #
          # If this behavior is undesirable, periodically executing a simple
          # SQL query in the transaction (e.g., `SELECT 1`) prevents the
          # transaction from becoming idle.
          #
          # ## Snapshot Read-Only Transactions
          #
          # Snapshot read-only transactions provide a simpler method than
          # locking read-write transactions for doing several consistent
          # reads. However, this type of transaction does not support writes.
          #
          # Snapshot transactions do not take locks. Instead, they work by
          # choosing a Cloud Spanner timestamp, then executing all reads at that
          # timestamp. Since they do not acquire locks, they do not block
          # concurrent read-write transactions.
          #
          # Unlike locking read-write transactions, snapshot read-only
          # transactions never abort. They can fail if the chosen read
          # timestamp is garbage collected; however, the default garbage
          # collection policy is generous enough that most applications do not
          # need to worry about this in practice.
          #
          # Snapshot read-only transactions do not need to call
          # Commit or
          # Rollback (and in fact are not
          # permitted to do so).
          #
          # To execute a snapshot transaction, the client specifies a timestamp
          # bound, which tells Cloud Spanner how to choose a read timestamp.
          #
          # The types of timestamp bound are:
          #
          #   - Strong (the default).
          #   - Bounded staleness.
          #   - Exact staleness.
          #
          # If the Cloud Spanner database to be read is geographically distributed,
          # stale read-only transactions can execute more quickly than strong
          # or read-write transactions, because they are able to execute far
          # from the leader replica.
          #
          # Each type of timestamp bound is discussed in detail below.
          #
          # ### Strong
          #
          # Strong reads are guaranteed to see the effects of all transactions
          # that have committed before the start of the read. Furthermore, all
          # rows yielded by a single read are consistent with each other -- if
          # any part of the read observes a transaction, all parts of the read
          # see the transaction.
          #
          # Strong reads are not repeatable: two consecutive strong read-only
          # transactions might return inconsistent results if there are
          # concurrent writes. If consistency across reads is required, the
          # reads should be executed within a transaction or at an exact read
          # timestamp.
          #
          # See TransactionOptions.ReadOnly.strong.
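          #
          # For example (a sketch, using the `readOnly` options defined in
          # this request body), a strong read-only transaction is requested
          # with:
          #
          #     "readOnly": {"strong": True}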
          #
          # ### Exact Staleness
          #
          # These timestamp bounds execute reads at a user-specified
          # timestamp. Reads at a timestamp are guaranteed to see a consistent
          # prefix of the global transaction history: they observe
          # modifications done by all transactions with a commit timestamp <=
          # the read timestamp, and observe none of the modifications done by
          # transactions with a larger commit timestamp. They will block until
          # all conflicting transactions that may be assigned commit timestamps
          # <= the read timestamp have finished.
          #
          # The timestamp can either be expressed as an absolute Cloud Spanner commit
          # timestamp or a staleness relative to the current time.
          #
          # These modes do not require a "negotiation phase" to pick a
          # timestamp. As a result, they execute slightly faster than the
          # equivalent boundedly stale concurrency modes. On the other hand,
          # boundedly stale reads usually return fresher results.
          #
          # See TransactionOptions.ReadOnly.read_timestamp and
          # TransactionOptions.ReadOnly.exact_staleness.
          #
          # ### Bounded Staleness
          #
          # Bounded staleness modes allow Cloud Spanner to pick the read timestamp,
          # subject to a user-provided staleness bound. Cloud Spanner chooses the
          # newest timestamp within the staleness bound that allows execution
          # of the reads at the closest available replica without blocking.
          #
          # All rows yielded are consistent with each other -- if any part of
          # the read observes a transaction, all parts of the read see the
          # transaction. Boundedly stale reads are not repeatable: two stale
          # reads, even if they use the same staleness bound, can execute at
          # different timestamps and thus return inconsistent results.
          #
          # Boundedly stale reads execute in two phases: the first phase
          # negotiates a timestamp among all replicas needed to serve the
          # read. In the second phase, reads are executed at the negotiated
          # timestamp.
          #
          # As a result of the two-phase execution, bounded staleness reads are
          # usually a little slower than comparable exact staleness
          # reads. However, they are typically able to return fresher
          # results, and are more likely to execute at the closest replica.
          #
          # Because the timestamp negotiation requires up-front knowledge of
          # which rows will be read, it can only be used with single-use
          # read-only transactions.
          #
          # See TransactionOptions.ReadOnly.max_staleness and
          # TransactionOptions.ReadOnly.min_read_timestamp.
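          #
          # For example (a sketch, using the `readOnly` options defined in
          # this request body; the duration string follows the `maxStaleness`
          # field's format), a staleness bound of at most 10 seconds is
          # expressed as:
          #
          #     "readOnly": {"maxStaleness": "10s"}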
          #
          # ### Old Read Timestamps and Garbage Collection
          #
          # Cloud Spanner continuously garbage collects deleted and overwritten data
          # in the background to reclaim storage space. This process is known
          # as "version GC". By default, version GC reclaims versions after they
          # are one hour old. Because of this, Cloud Spanner cannot perform reads
          # at read timestamps more than one hour in the past. This
          # restriction also applies to in-progress reads and/or SQL queries whose
          # timestamps become too old while executing. Reads and SQL queries with
          # too-old read timestamps fail with the error `FAILED_PRECONDITION`.
          #
          # ## Partitioned DML Transactions
          #
          # Partitioned DML transactions are used to execute DML statements with a
          # different execution strategy that provides different, and often better,
          # scalability properties for large, table-wide operations than DML in a
          # ReadWrite transaction. Smaller scoped statements, such as an OLTP workload,
          # should prefer using ReadWrite transactions.
          #
          # Partitioned DML partitions the keyspace and runs the DML statement on each
          # partition in separate, internal transactions. These transactions commit
          # automatically when complete, and run independently from one another.
          #
          # To reduce lock contention, this execution strategy only acquires read locks
          # on rows that match the WHERE clause of the statement. Additionally, the
          # smaller per-partition transactions hold locks for less time.
          #
          # That said, Partitioned DML is not a drop-in replacement for standard DML used
          # in ReadWrite transactions.
          #
          #  - The DML statement must be fully-partitionable. Specifically, the statement
          #    must be expressible as the union of many statements which each access only
          #    a single row of the table.
          #
          #  - The statement is not applied atomically to all rows of the table. Rather,
          #    the statement is applied atomically to partitions of the table, in
          #    independent transactions. Secondary index rows are updated atomically
          #    with the base table rows.
          #
          #  - Partitioned DML does not guarantee exactly-once execution semantics
          #    against a partition. The statement will be applied at least once to each
          #    partition. It is strongly recommended that the DML statement be
          #    idempotent to avoid unexpected results. For instance, it is potentially
          #    dangerous to run a statement such as
          #    `UPDATE table SET column = column + 1` as it could be run multiple times
          #    against some rows.
          #
          #  - The partitions are committed automatically - there is no support for
          #    Commit or Rollback. If the call returns an error, or if the client issuing
          #    the ExecuteSql call dies, it is possible that some rows had the statement
          #    executed on them successfully. It is also possible that the statement was
          #    never executed against other rows.
          #
          #  - Partitioned DML transactions may only contain the execution of a single
          #    DML statement via ExecuteSql or ExecuteStreamingSql.
          #
          #  - If any error is encountered during the execution of the partitioned DML
          #    operation (for instance, a UNIQUE INDEX violation, division by zero, or a
          #    value that cannot be stored due to schema constraints), then the
          #    operation is stopped at that point and an error is returned. It is
          #    possible that at this point, some partitions have been committed (or even
          #    committed multiple times), and other partitions have not been run at all.
          #
          # Given the above, Partitioned DML is a good fit for large, database-wide
          # operations that are idempotent, such as deleting old rows from a very large
          # table.
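          #
          # As a sketch (the session path is illustrative; `sessions` is the
          # collection resource from this API), a Partitioned DML transaction
          # could be started with:
          #
          #     sessions.beginTransaction(
          #         session="projects/p/instances/i/databases/d/sessions/s",
          #         body={"options": {"partitionedDml": {}}}).execute()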
        "readWrite": { # Message type to initiate a read-write transaction. Currently this # Transaction may write.
            # transaction type has no options.
            #
            # Authorization to begin a read-write transaction requires
            # `spanner.databases.beginOrRollbackReadWriteTransaction` permission
            # on the `session` resource.
        },
        "readOnly": { # Message type to initiate a read-only transaction. # Transaction will not write.
            #
            # Authorization to begin a read-only transaction requires
            # `spanner.databases.beginReadOnlyTransaction` permission
            # on the `session` resource.
          "minReadTimestamp": "A String", # Executes all reads at a timestamp >= `min_read_timestamp`.
              #
              # This is useful for requesting fresher data than some previous
              # read, or data that is fresh enough to observe the effects of some
              # previously committed transaction whose timestamp is known.
              #
              # Note that this option can only be used in single-use transactions.
              #
              # A timestamp in RFC3339 UTC "Zulu" format, accurate to nanoseconds.
              # Example: `"2014-10-02T15:01:23.045123456Z"`.
          "returnReadTimestamp": True or False, # If true, the Cloud Spanner-selected read timestamp is included in
              # the Transaction message that describes the transaction.
          "maxStaleness": "A String", # Read data at a timestamp >= `NOW - max_staleness`
              # seconds. Guarantees that all writes that have committed more
              # than the specified number of seconds ago are visible. Because
              # Cloud Spanner chooses the exact timestamp, this mode works even if
              # the client's local clock is substantially skewed from Cloud Spanner
              # commit timestamps.
              #
              # Useful for reading the freshest data available at a nearby
              # replica, while bounding the possible staleness if the local
              # replica has fallen behind.
              #
              # Note that this option can only be used in single-use
              # transactions.
          "exactStaleness": "A String", # Executes all reads at a timestamp that is `exact_staleness`
              # old. The timestamp is chosen soon after the read is started.
              #
              # Guarantees that all writes that have committed more than the
              # specified number of seconds ago are visible. Because Cloud Spanner
              # chooses the exact timestamp, this mode works even if the client's
              # local clock is substantially skewed from Cloud Spanner commit
              # timestamps.
              #
              # Useful for reading at nearby replicas without the distributed
              # timestamp negotiation overhead of `max_staleness`.
          "readTimestamp": "A String", # Executes all reads at the given timestamp. Unlike other modes,
              # reads at a specific timestamp are repeatable; the same read at
              # the same timestamp always returns the same data. If the
              # timestamp is in the future, the read will block until the
              # specified timestamp, modulo the read's deadline.
              #
              # Useful for large scale consistent reads such as mapreduces, or
              # for coordinating many reads against a consistent snapshot of the
              # data.
              #
              # A timestamp in RFC3339 UTC "Zulu" format, accurate to nanoseconds.
              # Example: `"2014-10-02T15:01:23.045123456Z"`.
          "strong": True or False, # Read at a timestamp where all previously committed transactions
              # are visible.
        },
        "partitionedDml": { # Message type to initiate a Partitioned DML transaction. # Partitioned DML transaction.
            #
            # Authorization to begin a Partitioned DML transaction requires
            # `spanner.databases.beginPartitionedDmlTransaction` permission
            # on the `session` resource.
        },
      },
      "id": "A String", # Execute the read or SQL query in a previously-started transaction.
    },
    "keySet": { # Required. `key_set` identifies the rows to be yielded. `key_set` names the
        # primary keys of the rows in table to be yielded, unless index
        # is present. If index is present, then key_set instead names
        # index keys in index.
        #
        # It is not an error for the `key_set` to name rows that do not
        # exist in the database. Read yields nothing for nonexistent rows.
        #
        # `KeySet` defines a collection of Cloud Spanner keys and/or key ranges. All
        # the keys are expected to be in the same table or index. The keys need
        # not be sorted in any particular way.
        #
        # If the same key is specified multiple times in the set (for example
        # if two ranges, two keys, or a key and a range overlap), Cloud Spanner
        # behaves as if the key were only specified once.
      "ranges": [ # A list of key ranges. See KeyRange for more information about
          # key range specifications.
        { # KeyRange represents a range of rows in a table or index.
            #
            # A range has a start key and an end key. These keys can be open or
            # closed, indicating if the range includes rows with that key.
            #
            # Keys are represented by lists, where the ith value in the list
            # corresponds to the ith component of the table or index primary key.
            # Individual values are encoded as described
            # here.
            #
            # For example, consider the following table definition:
            #
            #     CREATE TABLE UserEvents (
            #       UserName STRING(MAX),
            #       EventDate STRING(10)
            #     ) PRIMARY KEY(UserName, EventDate);
            #
            # The following keys name rows in this table:
            #
            #     "Bob", "2014-09-23"
            #
            # Since the `UserEvents` table's `PRIMARY KEY` clause names two
            # columns, each `UserEvents` key has two elements; the first is the
            # `UserName`, and the second is the `EventDate`.
            #
            # Key ranges with multiple components are interpreted
            # lexicographically by component using the table or index key's declared
            # sort order. For example, the following range returns all events for
            # user `"Bob"` that occurred in the year 2015:
            #
            #     "start_closed": ["Bob", "2015-01-01"]
            #     "end_closed": ["Bob", "2015-12-31"]
            #
            # Start and end keys can omit trailing key components. This affects the
            # inclusion and exclusion of rows that exactly match the provided key
            # components: if the key is closed, then rows that exactly match the
            # provided components are included; if the key is open, then rows
            # that exactly match are not included.
            #
            # For example, the following range includes all events for `"Bob"` that
            # occurred during and after the year 2000:
            #
            #     "start_closed": ["Bob", "2000-01-01"]
            #     "end_closed": ["Bob"]
            #
            # The next example retrieves all events for `"Bob"`:
            #
            #     "start_closed": ["Bob"]
            #     "end_closed": ["Bob"]
            #
            # To retrieve events before the year 2000:
            #
            #     "start_closed": ["Bob"]
            #     "end_open": ["Bob", "2000-01-01"]
            #
            # The following range includes all rows in the table:
            #
            #     "start_closed": []
            #     "end_closed": []
            #
            # This range returns all users whose `UserName` begins with any
            # character from A to C:
            #
            #     "start_closed": ["A"]
            #     "end_open": ["D"]
            #
            # This range returns all users whose `UserName` begins with B:
            #
            #     "start_closed": ["B"]
            #     "end_open": ["C"]
            #
            # Key ranges honor column sort order. For example, suppose a table is
            # defined as follows:
            #
            #     CREATE TABLE DescendingSortedTable (
            #       Key INT64,
            #       ...
            #     ) PRIMARY KEY(Key DESC);
            #
            # The following range retrieves all rows with key values between 1
            # and 100 inclusive:
            #
            #     "start_closed": ["100"]
            #     "end_closed": ["1"]
            #
            # Note that 100 is passed as the start, and 1 is passed as the end,
            # because `Key` is a descending column in the schema.
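            #
            # Expressed as a `ranges` entry using the fields below, the
            # year-2015 example above would be:
            #
            #     {"startClosed": ["Bob", "2015-01-01"],
            #      "endClosed": ["Bob", "2015-12-31"]}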
          "endOpen": [ # If the end is open, then the range excludes rows whose first
              # `len(end_open)` key columns exactly match `end_open`.
            "",
          ],
          "startOpen": [ # If the start is open, then the range excludes rows whose first
              # `len(start_open)` key columns exactly match `start_open`.
            "",
          ],
          "endClosed": [ # If the end is closed, then the range includes all rows whose
              # first `len(end_closed)` key columns exactly match `end_closed`.
            "",
          ],
          "startClosed": [ # If the start is closed, then the range includes all rows whose
              # first `len(start_closed)` key columns exactly match `start_closed`.
            "",
          ],
        },
      ],
      "keys": [ # A list of specific keys. Entries in `keys` should have exactly as
          # many elements as there are columns in the primary or index key
          # with which this `KeySet` is used.  Individual key values are
          # encoded as described here.
        [
          "",
        ],
      ],
      "all": True or False, # For convenience `all` can be set to `true` to indicate that this
          # `KeySet` matches all keys in the table or index. Note that any keys
          # specified in `keys` or `ranges` are only yielded once.
    },
6049    "partitionOptions": { # Additional options that affect how many partitions are
6050        # created. Options for a PartitionQueryRequest and PartitionReadRequest.
6051      "maxPartitions": "A String", # **Note:** This hint is currently ignored by PartitionQuery and
6052          # PartitionRead requests.
6053          #
6054          # The desired maximum number of partitions to return.  For example, this may
6055          # be set to the number of workers available.  The default for this option
6056          # is currently 10,000. The maximum value is currently 200,000.  This is only
6057          # a hint.  The actual number of partitions returned may be smaller or larger
6058          # than this requested maximum count.
6059      "partitionSizeBytes": "A String", # **Note:** This hint is currently ignored by PartitionQuery and
6060          # PartitionRead requests.
6061          #
6062          # The desired data size for each partition generated.  The default for this
6063          # option is currently 1 GiB.  This is only a hint. The actual size of each
6064          # partition may be smaller or larger than this requested size.
6065    },
6066    "table": "A String", # Required. The name of the table in the database to be read.
6067    "columns": [ # The columns of the table to be returned for each row matching
6068        # this request.
6069      "A String",
6070    ],
6071  }
6072
6073  x__xgafv: string, V1 error format.
6074    Allowed values
6075      1 - v1 error format
6076      2 - v2 error format
6077
6078Returns:
6079  An object of the form:
6080
6081    { # The response for PartitionQuery
6082      # or PartitionRead.
6083    "transaction": { # The transaction created by this request.
6084      "readTimestamp": "A String", # For snapshot read-only transactions, the read timestamp chosen
6085          # for the transaction. Not returned by default: see
6086          # TransactionOptions.ReadOnly.return_read_timestamp.
6087          #
6088          # A timestamp in RFC3339 UTC "Zulu" format, accurate to nanoseconds.
6089          # Example: `"2014-10-02T15:01:23.045123456Z"`.
6090      "id": "A String", # `id` may be used to identify the transaction in subsequent
6091          # Read,
6092          # ExecuteSql,
6093          # Commit, or
6094          # Rollback calls.
6095          #
6096          # Single-use read-only transactions do not have IDs, because
6097          # single-use transactions do not support multiple requests.
6098    },
6099    "partitions": [ # Partitions created by this request.
6100      { # Information returned for each partition returned in a
6101          # PartitionResponse.
6102        "partitionToken": "A String", # This token can be passed to Read, StreamingRead, ExecuteSql, or
6103            # ExecuteStreamingSql requests to restrict the results to those identified by
6104            # this partition token.
6105      },
6106    ],
6107  }</pre>
6108</div>
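As a hedged, offline sketch of how the PartitionResponse above is typically consumed (the helper name and sample token values are hypothetical, not part of the API): each returned `partitionToken` is paired with the created transaction's `id` to build one `read` request body per partition.

```python
def per_partition_read_bodies(partition_response, table, columns, key_set):
    """Build one Read request body per partition, reusing the transaction id
    and partition token returned by PartitionRead (hypothetical helper)."""
    txn_id = partition_response["transaction"]["id"]
    return [
        {
            "table": table,
            "columns": columns,
            "keySet": key_set,
            "transaction": {"id": txn_id},
            "partitionToken": partition["partitionToken"],
        }
        for partition in partition_response["partitions"]
    ]

# A response of the documented shape, with made-up values:
resp = {
    "transaction": {"id": "txn-1"},
    "partitions": [{"partitionToken": "tok-a"}, {"partitionToken": "tok-b"}],
}
bodies = per_partition_read_bodies(resp, "Users", ["UserId"], {"all": True})
```

Each body would then be sent in a separate `read` or `streamingRead` call, typically from a different worker.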
6109
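The key-range notes earlier on this page point out that ranges honor column sort order: for a key column declared `DESC`, the larger value is passed as the start of the range. A minimal sketch of that rule, using the camelCase field names of the request body (`closed_range` is a hypothetical helper, not part of the API):

```python
def closed_range(low, high, descending=False):
    """Build a closed KeyRange over a single key column, swapping the
    endpoints when the column's sort order is DESC (hypothetical helper)."""
    start, end = (high, low) if descending else (low, high)
    return {"startClosed": [start], "endClosed": [end]}

# For `Key INT64 ... PRIMARY KEY(Key DESC)`, keys 1..100 inclusive:
rng = closed_range("1", "100", descending=True)
# {'startClosed': ['100'], 'endClosed': ['1']}
```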
6110<div class="method">
6111    <code class="details" id="read">read(session, body, x__xgafv=None)</code>
6112  <pre>Reads rows from the database using key lookups and scans, as a
6113simple key/value style alternative to
6114ExecuteSql.  This method cannot be used to
6115return a result set larger than 10 MiB; if the read matches more
6116data than that, the read fails with a `FAILED_PRECONDITION`
6117error.
6118
6119Reads inside read-write transactions might return `ABORTED`. If
6120this occurs, the application should restart the transaction from
6121the beginning. See Transaction for more details.
6122
6123Larger result sets can be yielded in streaming fashion by calling
6124StreamingRead instead.
6125
6126Args:
6127  session: string, Required. The session in which the read should be performed. (required)
6128  body: object, The request body. (required)
6129    The object takes the form of:
6130
6131{ # The request for Read and
6132      # StreamingRead.
6133    "index": "A String", # If non-empty, the name of an index on table. This index is
6134        # used instead of the table primary key when interpreting key_set
6135        # and sorting result rows. See key_set for further information.
6136    "transaction": { # The transaction to use. If none is provided, the default is a
6137        # temporary read-only transaction with strong concurrency.
6138        #
6139        # This message is used to select the transaction in which a Read or
6140        # ExecuteSql call runs. See TransactionOptions for more information
6141        # about transactions.
6142      "begin": { # Begin a new transaction and execute this read or SQL query in
6143          # it. The transaction ID of the new transaction is returned in
6144          # ResultSetMetadata.transaction, which is a Transaction.
6145          #
6146          # # Transactions
6147          # Each session can have at most one active transaction at a time. After the
6148          # active transaction is completed, the session can immediately be
6149          # re-used for the next transaction. It is not necessary to create a
6150          # new session for each transaction.
6151          #
6152          # # Transaction Modes
6153          #
6154          # Cloud Spanner supports three transaction modes:
6155          #
6156          #   1. Locking read-write. This type of transaction is the only way
6157          #      to write data into Cloud Spanner. These transactions rely on
6158          #      pessimistic locking and, if necessary, two-phase commit.
6159          #      Locking read-write transactions may abort, requiring the
6160          #      application to retry.
6161          #
6162          #   2. Snapshot read-only. This transaction type provides guaranteed
6163          #      consistency across several reads, but does not allow
6164          #      writes. Snapshot read-only transactions can be configured to
6165          #      read at timestamps in the past. Snapshot read-only
6166          #      transactions do not need to be committed.
6167          #
6168          #   3. Partitioned DML. This type of transaction is used to execute
6169          #      a single Partitioned DML statement. Partitioned DML partitions
6170          #      the key space and runs the DML statement over each partition
6171          #      in parallel using separate, internal transactions that commit
6172          #      independently. Partitioned DML transactions do not need to be
6173          #      committed.
6174          #
6175          # For transactions that only read, snapshot read-only transactions
6176          # provide simpler semantics and are almost always faster. In
6177          # particular, read-only transactions do not take locks, so they do
6178          # not conflict with read-write transactions. As a consequence of not
6179          # taking locks, they also do not abort, so retry loops are not needed.
6180          #
6181          # Transactions may only read/write data in a single database. They
6182          # may, however, read/write data in different tables within that
6183          # database.
6184          #
6185          # ## Locking Read-Write Transactions
6186          #
6187          # Locking transactions may be used to atomically read-modify-write
6188          # data anywhere in a database. This type of transaction is externally
6189          # consistent.
6190          #
6191          # Clients should attempt to minimize the amount of time a transaction
6192          # is active. Faster transactions commit with higher probability
6193          # and cause less contention. Cloud Spanner attempts to keep read locks
6194          # active as long as the transaction continues to do reads, and the
6195          # transaction has not been terminated by
6196          # Commit or
6197          # Rollback.  Long periods of
6198          # inactivity at the client may cause Cloud Spanner to release a
6199          # transaction's locks and abort it.
6200          #
6201          # Conceptually, a read-write transaction consists of zero or more
6202          # reads or SQL statements followed by
6203          # Commit. At any time before
6204          # Commit, the client can send a
6205          # Rollback request to abort the
6206          # transaction.
6207          #
6208          # ### Semantics
6209          #
6210          # Cloud Spanner can commit the transaction if all read locks it acquired
6211          # are still valid at commit time, and it is able to acquire write
6212          # locks for all writes. Cloud Spanner can abort the transaction for any
6213          # reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees
6214          # that the transaction has not modified any user data in Cloud Spanner.
6215          #
6216          # Unless the transaction commits, Cloud Spanner makes no guarantees about
6217          # how long the transaction's locks were held for. It is an error to
6218          # use Cloud Spanner locks for any sort of mutual exclusion other than
6219          # between Cloud Spanner transactions themselves.
6220          #
6221          # ### Retrying Aborted Transactions
6222          #
6223          # When a transaction aborts, the application can choose to retry the
6224          # whole transaction again. To maximize the chances of successfully
6225          # committing the retry, the client should execute the retry in the
6226          # same session as the original attempt. The original session's lock
6227          # priority increases with each consecutive abort, meaning that each
6228          # attempt has a slightly better chance of success than the previous.
6229          #
6230          # Under some circumstances (e.g., many transactions attempting to
6231          # modify the same row(s)), a transaction can abort many times in a
6232          # short period before successfully committing. Thus, it is not a good
6233          # idea to cap the number of retries a transaction can attempt;
6234          # instead, it is better to limit the total amount of wall time spent
6235          # retrying.
6236          #
6237          # ### Idle Transactions
6238          #
6239          # A transaction is considered idle if it has no outstanding reads or
6240          # SQL queries and has not started a read or SQL query within the last 10
6241          # seconds. Idle transactions can be aborted by Cloud Spanner so that they
6242          # don't hold on to locks indefinitely. In that case, the commit will
6243          # fail with error `ABORTED`.
6244          #
6245          # If this behavior is undesirable, periodically executing a simple
6246          # SQL query in the transaction (e.g., `SELECT 1`) prevents the
6247          # transaction from becoming idle.
6248          #
6249          # ## Snapshot Read-Only Transactions
6250          #
6251          # Snapshot read-only transactions provide a simpler method than
6252          # locking read-write transactions for doing several consistent
6253          # reads. However, this type of transaction does not support writes.
6254          #
6255          # Snapshot transactions do not take locks. Instead, they work by
6256          # choosing a Cloud Spanner timestamp, then executing all reads at that
6257          # timestamp. Since they do not acquire locks, they do not block
6258          # concurrent read-write transactions.
6259          #
6260          # Unlike locking read-write transactions, snapshot read-only
6261          # transactions never abort. They can fail if the chosen read
6262          # timestamp is garbage collected; however, the default garbage
6263          # collection policy is generous enough that most applications do not
6264          # need to worry about this in practice.
6265          #
6266          # Snapshot read-only transactions do not need to call
6267          # Commit or
6268          # Rollback (and in fact are not
6269          # permitted to do so).
6270          #
6271          # To execute a snapshot transaction, the client specifies a timestamp
6272          # bound, which tells Cloud Spanner how to choose a read timestamp.
6273          #
6274          # The types of timestamp bound are:
6275          #
6276          #   - Strong (the default).
6277          #   - Bounded staleness.
6278          #   - Exact staleness.
6279          #
6280          # If the Cloud Spanner database to be read is geographically distributed,
6281          # stale read-only transactions can execute more quickly than strong
6282          # or read-write transactions, because they are able to execute far
6283          # from the leader replica.
6284          #
6285          # Each type of timestamp bound is discussed in detail below.
6286          #
6287          # ### Strong
6288          #
6289          # Strong reads are guaranteed to see the effects of all transactions
6290          # that have committed before the start of the read. Furthermore, all
6291          # rows yielded by a single read are consistent with each other -- if
6292          # any part of the read observes a transaction, all parts of the read
6293          # see the transaction.
6294          #
6295          # Strong reads are not repeatable: two consecutive strong read-only
6296          # transactions might return inconsistent results if there are
6297          # concurrent writes. If consistency across reads is required, the
6298          # reads should be executed within a transaction or at an exact read
6299          # timestamp.
6300          #
6301          # See TransactionOptions.ReadOnly.strong.
6302          #
6303          # ### Exact Staleness
6304          #
6305          # These timestamp bounds execute reads at a user-specified
6306          # timestamp. Reads at a timestamp are guaranteed to see a consistent
6307          # prefix of the global transaction history: they observe
6308          # modifications done by all transactions with a commit timestamp <=
6309          # the read timestamp, and observe none of the modifications done by
6310          # transactions with a larger commit timestamp. They will block until
6311          # all conflicting transactions that may be assigned commit timestamps
6312          # <= the read timestamp have finished.
6313          #
6314          # The timestamp can either be expressed as an absolute Cloud Spanner commit
6315          # timestamp or a staleness relative to the current time.
6316          #
6317          # These modes do not require a "negotiation phase" to pick a
6318          # timestamp. As a result, they execute slightly faster than the
6319          # equivalent boundedly stale concurrency modes. On the other hand,
6320          # boundedly stale reads usually return fresher results.
6321          #
6322          # See TransactionOptions.ReadOnly.read_timestamp and
6323          # TransactionOptions.ReadOnly.exact_staleness.
6324          #
6325          # ### Bounded Staleness
6326          #
6327          # Bounded staleness modes allow Cloud Spanner to pick the read timestamp,
6328          # subject to a user-provided staleness bound. Cloud Spanner chooses the
6329          # newest timestamp within the staleness bound that allows execution
6330          # of the reads at the closest available replica without blocking.
6331          #
6332          # All rows yielded are consistent with each other -- if any part of
6333          # the read observes a transaction, all parts of the read see the
6334          # transaction. Boundedly stale reads are not repeatable: two stale
6335          # reads, even if they use the same staleness bound, can execute at
6336          # different timestamps and thus return inconsistent results.
6337          #
6338          # Boundedly stale reads execute in two phases: the first phase
6339          # negotiates a timestamp among all replicas needed to serve the
6340          # read. In the second phase, reads are executed at the negotiated
6341          # timestamp.
6342          #
6343          # As a result of the two-phase execution, bounded staleness reads are
6344          # usually a little slower than comparable exact staleness
6345          # reads. However, they are typically able to return fresher
6346          # results, and are more likely to execute at the closest replica.
6347          #
6348          # Because the timestamp negotiation requires up-front knowledge of
6349          # which rows will be read, it can only be used with single-use
6350          # read-only transactions.
6351          #
6352          # See TransactionOptions.ReadOnly.max_staleness and
6353          # TransactionOptions.ReadOnly.min_read_timestamp.
6354          #
6355          # ### Old Read Timestamps and Garbage Collection
6356          #
6357          # Cloud Spanner continuously garbage collects deleted and overwritten data
6358          # in the background to reclaim storage space. This process is known
6359          # as "version GC". By default, version GC reclaims versions after they
6360          # are one hour old. Because of this, Cloud Spanner cannot perform reads
6361          # at read timestamps more than one hour in the past. This
6362          # restriction also applies to in-progress reads and/or SQL queries whose
6363          # timestamps become too old while executing. Reads and SQL queries with
6364          # too-old read timestamps fail with the error `FAILED_PRECONDITION`.
6365          #
6366          # ## Partitioned DML Transactions
6367          #
6368          # Partitioned DML transactions are used to execute DML statements with a
6369          # different execution strategy that provides different, and often better,
6370          # scalability properties for large, table-wide operations than DML in a
6371          # ReadWrite transaction. Smaller scoped statements, such as an OLTP workload,
6372          # should prefer using ReadWrite transactions.
6373          #
6374          # Partitioned DML partitions the keyspace and runs the DML statement on each
6375          # partition in separate, internal transactions. These transactions commit
6376          # automatically when complete, and run independently from one another.
6377          #
6378          # To reduce lock contention, this execution strategy only acquires read locks
6379          # on rows that match the WHERE clause of the statement. Additionally, the
6380          # smaller per-partition transactions hold locks for less time.
6381          #
6382          # That said, Partitioned DML is not a drop-in replacement for standard DML used
6383          # in ReadWrite transactions.
6384          #
6385          #  - The DML statement must be fully-partitionable. Specifically, the statement
6386          #    must be expressible as the union of many statements which each access only
6387          #    a single row of the table.
6388          #
6389          #  - The statement is not applied atomically to all rows of the table. Rather,
6390          #    the statement is applied atomically to partitions of the table, in
6391          #    independent transactions. Secondary index rows are updated atomically
6392          #    with the base table rows.
6393          #
6394          #  - Partitioned DML does not guarantee exactly-once execution semantics
6395          #    against a partition. The statement will be applied at least once to each
6396          #    partition. It is strongly recommended that the DML statement be
6397          #    idempotent to avoid unexpected results. For instance, it is potentially
6398          #    dangerous to run a statement such as
6399          #    `UPDATE table SET column = column + 1` as it could be run multiple times
6400          #    against some rows.
6401          #
6402          #  - The partitions are committed automatically - there is no support for
6403          #    Commit or Rollback. If the call returns an error, or if the client issuing
6404          #    the ExecuteSql call dies, it is possible that some rows had the statement
6405          #    executed on them successfully. It is also possible that the statement was
6406          #    never executed against other rows.
6407          #
6408          #  - Partitioned DML transactions may only contain the execution of a single
6409          #    DML statement via ExecuteSql or ExecuteStreamingSql.
6410          #
6411          #  - If any error is encountered during the execution of the partitioned DML
6412          #    operation (for instance, a UNIQUE INDEX violation, division by zero, or a
6413          #    value that cannot be stored due to schema constraints), then the
6414          #    operation is stopped at that point and an error is returned. It is
6415          #    possible that at this point, some partitions have been committed (or even
6416          #    committed multiple times), and other partitions have not been run at all.
6417          #
6418          # Given the above, Partitioned DML is a good fit for large, database-wide
6419          # operations that are idempotent, such as deleting old rows from a very large
6420          # table.
6421        "readWrite": { # Transaction may write. Message type to initiate a read-write
6422            # transaction. Currently this transaction type has no options.
6423            #
6424            # Authorization to begin a read-write transaction requires
6425            # `spanner.databases.beginOrRollbackReadWriteTransaction` permission
6426            # on the `session` resource.
6427        },
6428        "readOnly": { # Message type to initiate a read-only transaction. # Transaction will not write.
6429            #
6430            # Authorization to begin a read-only transaction requires
6431            # `spanner.databases.beginReadOnlyTransaction` permission
6432            # on the `session` resource.
6433          "minReadTimestamp": "A String", # Executes all reads at a timestamp >= `min_read_timestamp`.
6434              #
6435              # This is useful for requesting fresher data than some previous
6436              # read, or data that is fresh enough to observe the effects of some
6437              # previously committed transaction whose timestamp is known.
6438              #
6439              # Note that this option can only be used in single-use transactions.
6440              #
6441              # A timestamp in RFC3339 UTC "Zulu" format, accurate to nanoseconds.
6442              # Example: `"2014-10-02T15:01:23.045123456Z"`.
6443          "returnReadTimestamp": True or False, # If true, the Cloud Spanner-selected read timestamp is included in
6444              # the Transaction message that describes the transaction.
6445          "maxStaleness": "A String", # Read data at a timestamp >= `NOW - max_staleness`
6446              # seconds. Guarantees that all writes that have committed more
6447              # than the specified number of seconds ago are visible. Because
6448              # Cloud Spanner chooses the exact timestamp, this mode works even if
6449              # the client's local clock is substantially skewed from Cloud Spanner
6450              # commit timestamps.
6451              #
6452              # Useful for reading the freshest data available at a nearby
6453              # replica, while bounding the possible staleness if the local
6454              # replica has fallen behind.
6455              #
6456              # Note that this option can only be used in single-use
6457              # transactions.
6458          "exactStaleness": "A String", # Executes all reads at a timestamp that is `exact_staleness`
6459              # old. The timestamp is chosen soon after the read is started.
6460              #
6461              # Guarantees that all writes that have committed more than the
6462              # specified number of seconds ago are visible. Because Cloud Spanner
6463              # chooses the exact timestamp, this mode works even if the client's
6464              # local clock is substantially skewed from Cloud Spanner commit
6465              # timestamps.
6466              #
6467              # Useful for reading at nearby replicas without the distributed
6468              # timestamp negotiation overhead of `max_staleness`.
6469          "readTimestamp": "A String", # Executes all reads at the given timestamp. Unlike other modes,
6470              # reads at a specific timestamp are repeatable; the same read at
6471              # the same timestamp always returns the same data. If the
6472              # timestamp is in the future, the read will block until the
6473              # specified timestamp, modulo the read's deadline.
6474              #
6475              # Useful for large scale consistent reads such as mapreduces, or
6476              # for coordinating many reads against a consistent snapshot of the
6477              # data.
6478              #
6479              # A timestamp in RFC3339 UTC "Zulu" format, accurate to nanoseconds.
6480              # Example: `"2014-10-02T15:01:23.045123456Z"`.
6481          "strong": True or False, # Read at a timestamp where all previously committed transactions
6482              # are visible.
6483        },
6484        "partitionedDml": { # Message type to initiate a Partitioned DML transaction. # Partitioned DML transaction.
6485            #
6486            # Authorization to begin a Partitioned DML transaction requires
6487            # `spanner.databases.beginPartitionedDmlTransaction` permission
6488            # on the `session` resource.
6489        },
6490      },
6491      "singleUse": { # Execute the read or SQL query in a temporary transaction.
6492          # This is the most efficient way to execute a transaction that
6493          # consists of a single SQL query.
6494          #
6495          # # Transactions
6496          # Each session can have at most one active transaction at a time. After the
6497          # active transaction is completed, the session can immediately be
6498          # re-used for the next transaction. It is not necessary to create a
6499          # new session for each transaction.
6500          #
6501          # # Transaction Modes
6502          #
6503          # Cloud Spanner supports three transaction modes:
6504          #
6505          #   1. Locking read-write. This type of transaction is the only way
6506          #      to write data into Cloud Spanner. These transactions rely on
6507          #      pessimistic locking and, if necessary, two-phase commit.
6508          #      Locking read-write transactions may abort, requiring the
6509          #      application to retry.
6510          #
6511          #   2. Snapshot read-only. This transaction type provides guaranteed
6512          #      consistency across several reads, but does not allow
6513          #      writes. Snapshot read-only transactions can be configured to
6514          #      read at timestamps in the past. Snapshot read-only
6515          #      transactions do not need to be committed.
6516          #
6517          #   3. Partitioned DML. This type of transaction is used to execute
6518          #      a single Partitioned DML statement. Partitioned DML partitions
6519          #      the key space and runs the DML statement over each partition
6520          #      in parallel using separate, internal transactions that commit
6521          #      independently. Partitioned DML transactions do not need to be
6522          #      committed.
6523          #
6524          # For transactions that only read, snapshot read-only transactions
6525          # provide simpler semantics and are almost always faster. In
6526          # particular, read-only transactions do not take locks, so they do
6527          # not conflict with read-write transactions. As a consequence of not
6528          # taking locks, they also do not abort, so retry loops are not needed.
6529          #
6530          # Transactions may only read/write data in a single database. They
6531          # may, however, read/write data in different tables within that
6532          # database.
6533          #
6534          # ## Locking Read-Write Transactions
6535          #
6536          # Locking transactions may be used to atomically read-modify-write
6537          # data anywhere in a database. This type of transaction is externally
6538          # consistent.
6539          #
6540          # Clients should attempt to minimize the amount of time a transaction
6541          # is active. Faster transactions commit with higher probability
6542          # and cause less contention. Cloud Spanner attempts to keep read locks
6543          # active as long as the transaction continues to do reads, and the
6544          # transaction has not been terminated by
6545          # Commit or
6546          # Rollback.  Long periods of
6547          # inactivity at the client may cause Cloud Spanner to release a
6548          # transaction's locks and abort it.
6549          #
6550          # Conceptually, a read-write transaction consists of zero or more
6551          # reads or SQL statements followed by
6552          # Commit. At any time before
6553          # Commit, the client can send a
6554          # Rollback request to abort the
6555          # transaction.
6556          #
6557          # ### Semantics
6558          #
6559          # Cloud Spanner can commit the transaction if all read locks it acquired
6560          # are still valid at commit time, and it is able to acquire write
6561          # locks for all writes. Cloud Spanner can abort the transaction for any
6562          # reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees
6563          # that the transaction has not modified any user data in Cloud Spanner.
6564          #
6565          # Unless the transaction commits, Cloud Spanner makes no guarantees about
6566          # how long the transaction's locks were held for. It is an error to
6567          # use Cloud Spanner locks for any sort of mutual exclusion other than
6568          # between Cloud Spanner transactions themselves.
6569          #
6570          # ### Retrying Aborted Transactions
6571          #
6572          # When a transaction aborts, the application can choose to retry the
6573          # whole transaction again. To maximize the chances of successfully
6574          # committing the retry, the client should execute the retry in the
6575          # same session as the original attempt. The original session's lock
6576          # priority increases with each consecutive abort, meaning that each
6577          # attempt has a slightly better chance of success than the previous.
6578          #
6579          # Under some circumstances (e.g., many transactions attempting to
6580          # modify the same row(s)), a transaction can abort many times in a
6581          # short period before successfully committing. Thus, it is not a good
6582          # idea to cap the number of retries a transaction can attempt;
6583          # instead, it is better to limit the total amount of wall time spent
6584          # retrying.
6585          #
6586          # ### Idle Transactions
6587          #
6588          # A transaction is considered idle if it has no outstanding reads or
6589          # SQL queries and has not started a read or SQL query within the last 10
6590          # seconds. Idle transactions can be aborted by Cloud Spanner so that they
6591          # don't hold on to locks indefinitely. In that case, the commit will
6592          # fail with error `ABORTED`.
6593          #
6594          # If this behavior is undesirable, periodically executing a simple
6595          # SQL query in the transaction (e.g., `SELECT 1`) prevents the
6596          # transaction from becoming idle.
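The periodic `SELECT 1` keep-alive can be scheduled as sketched below. The helper is an illustration, not part of any client library; `execute_sql` is whatever callable issues the query inside your transaction.

```python
import threading

def start_keepalive(execute_sql, interval_seconds=5.0):
    # Issue a trivial query well inside the 10-second idle window so
    # Cloud Spanner never considers the transaction idle.
    stop = threading.Event()

    def loop():
        while not stop.wait(interval_seconds):
            execute_sql("SELECT 1")

    threading.Thread(target=loop, daemon=True).start()
    return stop  # call stop.set() once the transaction commits or rolls back
```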
6597          #
6598          # ## Snapshot Read-Only Transactions
6599          #
6600          # Snapshot read-only transactions provide a simpler method than
6601          # locking read-write transactions for doing several consistent
6602          # reads. However, this type of transaction does not support writes.
6603          #
6604          # Snapshot transactions do not take locks. Instead, they work by
6605          # choosing a Cloud Spanner timestamp, then executing all reads at that
6606          # timestamp. Since they do not acquire locks, they do not block
6607          # concurrent read-write transactions.
6608          #
6609          # Unlike locking read-write transactions, snapshot read-only
6610          # transactions never abort. They can fail if the chosen read
6611          # timestamp is garbage collected; however, the default garbage
6612          # collection policy is generous enough that most applications do not
6613          # need to worry about this in practice.
6614          #
6615          # Snapshot read-only transactions do not need to call
6616          # Commit or
6617          # Rollback (and in fact are not
6618          # permitted to do so).
6619          #
6620          # To execute a snapshot transaction, the client specifies a timestamp
6621          # bound, which tells Cloud Spanner how to choose a read timestamp.
6622          #
6623          # The types of timestamp bound are:
6624          #
6625          #   - Strong (the default).
6626          #   - Bounded staleness.
6627          #   - Exact staleness.
6628          #
6629          # If the Cloud Spanner database to be read is geographically distributed,
6630          # stale read-only transactions can execute more quickly than strong
6631          # or read-write transactions, because they are able to execute far
6632          # from the leader replica.
6633          #
6634          # Each type of timestamp bound is discussed in detail below.
6635          #
6636          # ### Strong
6637          #
6638          # Strong reads are guaranteed to see the effects of all transactions
6639          # that have committed before the start of the read. Furthermore, all
6640          # rows yielded by a single read are consistent with each other -- if
6641          # any part of the read observes a transaction, all parts of the read
6642          # see the transaction.
6643          #
6644          # Strong reads are not repeatable: two consecutive strong read-only
6645          # transactions might return inconsistent results if there are
6646          # concurrent writes. If consistency across reads is required, the
6647          # reads should be executed within a transaction or at an exact read
6648          # timestamp.
6649          #
6650          # See TransactionOptions.ReadOnly.strong.
6651          #
6652          # ### Exact Staleness
6653          #
6654          # These timestamp bounds execute reads at a user-specified
6655          # timestamp. Reads at a timestamp are guaranteed to see a consistent
6656          # prefix of the global transaction history: they observe
6657          # modifications done by all transactions with a commit timestamp <=
6658          # the read timestamp, and observe none of the modifications done by
6659          # transactions with a larger commit timestamp. They will block until
6660          # all conflicting transactions that may be assigned commit timestamps
6661          # <= the read timestamp have finished.
6662          #
6663          # The timestamp can either be expressed as an absolute Cloud Spanner commit
6664          # timestamp or a staleness relative to the current time.
6665          #
6666          # These modes do not require a "negotiation phase" to pick a
6667          # timestamp. As a result, they execute slightly faster than the
6668          # equivalent boundedly stale concurrency modes. On the other hand,
6669          # boundedly stale reads usually return fresher results.
6670          #
6671          # See TransactionOptions.ReadOnly.read_timestamp and
6672          # TransactionOptions.ReadOnly.exact_staleness.
6673          #
6674          # ### Bounded Staleness
6675          #
6676          # Bounded staleness modes allow Cloud Spanner to pick the read timestamp,
6677          # subject to a user-provided staleness bound. Cloud Spanner chooses the
6678          # newest timestamp within the staleness bound that allows execution
6679          # of the reads at the closest available replica without blocking.
6680          #
6681          # All rows yielded are consistent with each other -- if any part of
6682          # the read observes a transaction, all parts of the read see the
6683          # transaction. Boundedly stale reads are not repeatable: two stale
6684          # reads, even if they use the same staleness bound, can execute at
6685          # different timestamps and thus return inconsistent results.
6686          #
6687          # Boundedly stale reads execute in two phases: the first phase
6688          # negotiates a timestamp among all replicas needed to serve the
6689          # read. In the second phase, reads are executed at the negotiated
6690          # timestamp.
6691          #
6692          # As a result of the two-phase execution, bounded staleness reads are
6693          # usually a little slower than comparable exact staleness
6694          # reads. However, they are typically able to return fresher
6695          # results, and are more likely to execute at the closest replica.
6696          #
6697          # Because the timestamp negotiation requires up-front knowledge of
6698          # which rows will be read, it can only be used with single-use
6699          # read-only transactions.
6700          #
6701          # See TransactionOptions.ReadOnly.max_staleness and
6702          # TransactionOptions.ReadOnly.min_read_timestamp.
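The three timestamp bounds map onto the `readOnly` options shown in the request body below; here is a sketch of each as a JSON-style dict. Field names follow this page's schema, and the duration strings assume the protobuf Duration JSON form (e.g. `"10s"`).

```python
# Strong (the default): see all transactions committed before the read starts.
strong_bound = {"readOnly": {"strong": True, "returnReadTimestamp": True}}

# Exact staleness: an absolute read timestamp, or a fixed relative staleness.
exact_timestamp = {"readOnly": {"readTimestamp": "2014-10-02T15:01:23.045123456Z"}}
exact_staleness = {"readOnly": {"exactStaleness": "10s"}}

# Bounded staleness: Cloud Spanner picks the newest timestamp within the
# bound; usable only in single-use read-only transactions.
bounded = {"readOnly": {"maxStaleness": "15s"}}
min_timestamp = {"readOnly": {"minReadTimestamp": "2014-10-02T15:01:23.045123456Z"}}
```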
6703          #
6704          # ### Old Read Timestamps and Garbage Collection
6705          #
6706          # Cloud Spanner continuously garbage collects deleted and overwritten data
6707          # in the background to reclaim storage space. This process is known
6708          # as "version GC". By default, version GC reclaims versions after they
6709          # are one hour old. Because of this, Cloud Spanner cannot perform reads
6710          # at read timestamps more than one hour in the past. This
6711          # restriction also applies to in-progress reads and/or SQL queries whose
6712          # timestamps become too old while executing. Reads and SQL queries with
6713          # too-old read timestamps fail with the error `FAILED_PRECONDITION`.
6714          #
6715          # ## Partitioned DML Transactions
6716          #
6717          # Partitioned DML transactions are used to execute DML statements with a
6718          # different execution strategy that provides different, and often better,
6719          # scalability properties for large, table-wide operations than DML in a
6720          # ReadWrite transaction. Smaller-scoped statements, such as those in an
6721          # OLTP workload, should use ReadWrite transactions.
6722          #
6723          # Partitioned DML partitions the keyspace and runs the DML statement on each
6724          # partition in separate, internal transactions. These transactions commit
6725          # automatically when complete, and run independently from one another.
6726          #
6727          # To reduce lock contention, this execution strategy only acquires read locks
6728          # on rows that match the WHERE clause of the statement. Additionally, the
6729          # smaller per-partition transactions hold locks for less time.
6730          #
6731          # That said, Partitioned DML is not a drop-in replacement for standard DML used
6732          # in ReadWrite transactions.
6733          #
6734          #  - The DML statement must be fully partitionable. Specifically, the statement
6735          #    must be expressible as the union of many statements which each access only
6736          #    a single row of the table.
6737          #
6738          #  - The statement is not applied atomically to all rows of the table. Rather,
6739          #    the statement is applied atomically to partitions of the table, in
6740          #    independent transactions. Secondary index rows are updated atomically
6741          #    with the base table rows.
6742          #
6743          #  - Partitioned DML does not guarantee exactly-once execution semantics
6744          #    against a partition. The statement will be applied at least once to each
6745          #    partition. It is strongly recommended that the DML statement be
6746          #    idempotent to avoid unexpected results. For instance, it is potentially
6747          #    dangerous to run a statement such as
6748          #    `UPDATE table SET column = column + 1` as it could be run multiple times
6749          #    against some rows.
6750          #
6751          #  - The partitions are committed automatically; there is no support for
6752          #    Commit or Rollback. If the call returns an error, or if the client issuing
6753          #    the ExecuteSql call dies, it is possible that some rows had the statement
6754          #    executed on them successfully. It is also possible that the statement was
6755          #    never executed against other rows.
6756          #
6757          #  - Partitioned DML transactions may only contain the execution of a single
6758          #    DML statement via ExecuteSql or ExecuteStreamingSql.
6759          #
6760          #  - If any error is encountered during the execution of the partitioned DML
6761          #    operation (for instance, a UNIQUE INDEX violation, division by zero, or a
6762          #    value that cannot be stored due to schema constraints), then the
6763          #    operation is stopped at that point and an error is returned. It is
6764          #    possible that at this point, some partitions have been committed (or even
6765          #    committed multiple times), and other partitions have not been run at all.
6766          #
6767          # Given the above, Partitioned DML is a good fit for large, database-wide
6768          # operations that are idempotent, such as deleting old rows from a very large
6769          # table.
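The two-step Partitioned DML flow can be sketched with request bodies in the shape this page documents. The table name and transaction-id placeholder are hypothetical; the DML statement is deliberately idempotent, since each partition may execute it more than once.

```python
# Step 1: begin a transaction with partitionedDml options (beginTransaction).
begin_body = {"options": {"partitionedDml": {}}}

# Step 2: run exactly one DML statement via ExecuteSql, referencing the
# transaction id returned by step 1. Deleting old rows is idempotent:
# running it twice on a partition deletes the same rows once.
execute_body = {
    "transaction": {"id": "<id returned by beginTransaction>"},
    "sql": "DELETE FROM Events WHERE EventDate < '2010-01-01'",
}
```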
6770        "readWrite": { # Message type to initiate a read-write transaction. Currently, this # Transaction may write.
6771            # transaction type has no options.
6772            #
6773            # Authorization to begin a read-write transaction requires
6774            # `spanner.databases.beginOrRollbackReadWriteTransaction` permission
6775            # on the `session` resource.
6776        },
6777        "readOnly": { # Message type to initiate a read-only transaction. # Transaction will not write.
6778            #
6779            # Authorization to begin a read-only transaction requires
6780            # `spanner.databases.beginReadOnlyTransaction` permission
6781            # on the `session` resource.
6782          "minReadTimestamp": "A String", # Executes all reads at a timestamp >= `min_read_timestamp`.
6783              #
6784              # This is useful for requesting fresher data than some previous
6785              # read, or data that is fresh enough to observe the effects of some
6786              # previously committed transaction whose timestamp is known.
6787              #
6788              # Note that this option can only be used in single-use transactions.
6789              #
6790              # A timestamp in RFC3339 UTC "Zulu" format, accurate to nanoseconds.
6791              # Example: `"2014-10-02T15:01:23.045123456Z"`.
6792          "returnReadTimestamp": True or False, # If true, the Cloud Spanner-selected read timestamp is included in
6793              # the Transaction message that describes the transaction.
6794          "maxStaleness": "A String", # Read data at a timestamp >= `NOW - max_staleness`
6795              # seconds. Guarantees that all writes that have committed more
6796              # than the specified number of seconds ago are visible. Because
6797              # Cloud Spanner chooses the exact timestamp, this mode works even if
6798              # the client's local clock is substantially skewed from Cloud Spanner
6799              # commit timestamps.
6800              #
6801              # Useful for reading the freshest data available at a nearby
6802              # replica, while bounding the possible staleness if the local
6803              # replica has fallen behind.
6804              #
6805              # Note that this option can only be used in single-use
6806              # transactions.
6807          "exactStaleness": "A String", # Executes all reads at a timestamp that is `exact_staleness`
6808              # old. The timestamp is chosen soon after the read is started.
6809              #
6810              # Guarantees that all writes that have committed more than the
6811              # specified number of seconds ago are visible. Because Cloud Spanner
6812              # chooses the exact timestamp, this mode works even if the client's
6813              # local clock is substantially skewed from Cloud Spanner commit
6814              # timestamps.
6815              #
6816              # Useful for reading at nearby replicas without the distributed
6817              # timestamp negotiation overhead of `max_staleness`.
6818          "readTimestamp": "A String", # Executes all reads at the given timestamp. Unlike other modes,
6819              # reads at a specific timestamp are repeatable; the same read at
6820              # the same timestamp always returns the same data. If the
6821              # timestamp is in the future, the read will block until the
6822              # specified timestamp, modulo the read's deadline.
6823              #
6824              # Useful for large scale consistent reads such as mapreduces, or
6825              # for coordinating many reads against a consistent snapshot of the
6826              # data.
6827              #
6828              # A timestamp in RFC3339 UTC "Zulu" format, accurate to nanoseconds.
6829              # Example: `"2014-10-02T15:01:23.045123456Z"`.
6830          "strong": True or False, # Read at a timestamp where all previously committed transactions
6831              # are visible.
6832        },
6833        "partitionedDml": { # Message type to initiate a Partitioned DML transaction. # Partitioned DML transaction.
6834            #
6835            # Authorization to begin a Partitioned DML transaction requires
6836            # `spanner.databases.beginPartitionedDmlTransaction` permission
6837            # on the `session` resource.
6838        },
6839      },
6840      "id": "A String", # Execute the read or SQL query in a previously-started transaction.
6841    },
6842    "resumeToken": "A String", # If this request is resuming a previously interrupted read,
6843        # `resume_token` should be copied from the last
6844        # PartialResultSet yielded before the interruption. Doing this
6845        # enables the new read to resume where the last read left off. The
6846        # rest of the request parameters must exactly match the request
6847        # that yielded this token.
6848    "partitionToken": "A String", # If present, results will be restricted to the specified partition
6849        # previously created using PartitionRead().    There must be an exact
6850        # match for the values of fields common to this message and the
6851        # PartitionReadRequest message used to create this partition_token.
6852        "keySet": { # `KeySet` defines a collection of Cloud Spanner keys and/or key ranges. # Required. `key_set` identifies the rows to be yielded. `key_set` names the
6853            # primary keys of the rows in table to be yielded, unless index
6854            # is present. If index is present, then key_set instead names
6855            # index keys in index.
6856            #
6857            # If the partition_token field is empty, rows are yielded
6858            # in table primary key order (if index is empty) or index key order
6859            # (if index is non-empty).  If the partition_token field is not
6860            # empty, rows will be yielded in an unspecified order.
6861            #
6862            # It is not an error for the `key_set` to name rows that do not
6863            # exist in the database. Read yields nothing for nonexistent rows.
6864            # All the keys are expected to be in the same table or index. The keys need
6865            # not be sorted in any particular way.
6866        #
6867        # If the same key is specified multiple times in the set (for example
6868        # if two ranges, two keys, or a key and a range overlap), Cloud Spanner
6869        # behaves as if the key were only specified once.
6870      "ranges": [ # A list of key ranges. See KeyRange for more information about
6871          # key range specifications.
6872        { # KeyRange represents a range of rows in a table or index.
6873            #
6874            # A range has a start key and an end key. These keys can be open or
6875            # closed, indicating if the range includes rows with that key.
6876            #
6877            # Keys are represented by lists, where the ith value in the list
6878            # corresponds to the ith component of the table or index primary key.
6879            # Individual values are encoded as described
6880            # here.
6881            #
6882            # For example, consider the following table definition:
6883            #
6884            #     CREATE TABLE UserEvents (
6885            #       UserName STRING(MAX),
6886            #       EventDate STRING(10)
6887            #     ) PRIMARY KEY(UserName, EventDate);
6888            #
6889            # The following keys name rows in this table:
6890            #
6891            #     "Bob", "2014-09-23"
6892            #
6893            # Since the `UserEvents` table's `PRIMARY KEY` clause names two
6894            # columns, each `UserEvents` key has two elements; the first is the
6895            # `UserName`, and the second is the `EventDate`.
6896            #
6897            # Key ranges with multiple components are interpreted
6898            # lexicographically by component using the table or index key's declared
6899            # sort order. For example, the following range returns all events for
6900            # user `"Bob"` that occurred in the year 2015:
6901            #
6902            #     "start_closed": ["Bob", "2015-01-01"]
6903            #     "end_closed": ["Bob", "2015-12-31"]
6904            #
6905            # Start and end keys can omit trailing key components. This affects the
6906            # inclusion and exclusion of rows that exactly match the provided key
6907            # components: if the key is closed, then rows that exactly match the
6908            # provided components are included; if the key is open, then rows
6909            # that exactly match are not included.
6910            #
6911            # For example, the following range includes all events for `"Bob"` that
6912            # occurred during and after the year 2000:
6913            #
6914            #     "start_closed": ["Bob", "2000-01-01"]
6915            #     "end_closed": ["Bob"]
6916            #
6917            # The next example retrieves all events for `"Bob"`:
6918            #
6919            #     "start_closed": ["Bob"]
6920            #     "end_closed": ["Bob"]
6921            #
6922            # To retrieve events before the year 2000:
6923            #
6924            #     "start_closed": ["Bob"]
6925            #     "end_open": ["Bob", "2000-01-01"]
6926            #
6927            # The following range includes all rows in the table:
6928            #
6929            #     "start_closed": []
6930            #     "end_closed": []
6931            #
6932            # This range returns all users whose `UserName` begins with any
6933            # character from A to C:
6934            #
6935            #     "start_closed": ["A"]
6936            #     "end_open": ["D"]
6937            #
6938            # This range returns all users whose `UserName` begins with B:
6939            #
6940            #     "start_closed": ["B"]
6941            #     "end_open": ["C"]
6942            #
6943            # Key ranges honor column sort order. For example, suppose a table is
6944            # defined as follows:
6945            #
6946            #     CREATE TABLE DescendingSortedTable (
6947            #       Key INT64,
6948            #       ...
6949            #     ) PRIMARY KEY(Key DESC);
6950            #
6951            # The following range retrieves all rows with key values between 1
6952            # and 100 inclusive:
6953            #
6954            #     "start_closed": ["100"]
6955            #     "end_closed": ["1"]
6956            #
6957            # Note that 100 is passed as the start, and 1 is passed as the end,
6958            # because `Key` is a descending column in the schema.
6959          "endOpen": [ # If the end is open, then the range excludes rows whose first
6960              # `len(end_open)` key columns exactly match `end_open`.
6961            "",
6962          ],
6963          "startOpen": [ # If the start is open, then the range excludes rows whose first
6964              # `len(start_open)` key columns exactly match `start_open`.
6965            "",
6966          ],
6967          "endClosed": [ # If the end is closed, then the range includes all rows whose
6968              # first `len(end_closed)` key columns exactly match `end_closed`.
6969            "",
6970          ],
6971          "startClosed": [ # If the start is closed, then the range includes all rows whose
6972              # first `len(start_closed)` key columns exactly match `start_closed`.
6973            "",
6974          ],
6975        },
6976      ],
6977      "keys": [ # A list of specific keys. Entries in `keys` should have exactly as
6978          # many elements as there are columns in the primary or index key
6979          # with which this `KeySet` is used.  Individual key values are
6980          # encoded as described here.
6981        [
6982          "",
6983        ],
6984      ],
6985      "all": True or False, # For convenience `all` can be set to `true` to indicate that this
6986          # `KeySet` matches all keys in the table or index. Note that any keys
6987          # specified in `keys` or `ranges` are only yielded once.
6988    },
6989    "limit": "A String", # If greater than zero, only the first `limit` rows are yielded. If `limit`
6990        # is zero, the default is no limit. A limit cannot be specified if
6991        # `partition_token` is set.
6992    "table": "A String", # Required. The name of the table in the database to be read.
6993    "columns": [ # The columns of table to be returned for each row matching
6994        # this request.
6995      "A String",
6996    ],
6997  }
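Putting the pieces together, a complete `read` request body for the `UserEvents` example table above might look like the sketch below: all of Bob's 2015 events, read with strong consistency. The `singleUse` transaction selector is an assumption of this illustration (this page documents the `begin` and `id` selectors).

```python
read_body = {
    "table": "UserEvents",
    "columns": ["UserName", "EventDate"],
    "keySet": {
        "ranges": [{
            # Closed on both ends: includes 2015-01-01 and 2015-12-31.
            "startClosed": ["Bob", "2015-01-01"],
            "endClosed": ["Bob", "2015-12-31"],
        }],
    },
    "transaction": {"singleUse": {"readOnly": {"strong": True}}},
}
```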
6998
6999  x__xgafv: string, V1 error format.
7000    Allowed values
7001      1 - v1 error format
7002      2 - v2 error format
7003
7004Returns:
7005  An object of the form:
7006
7007    { # Results from Read or
7008      # ExecuteSql.
7009    "rows": [ # Each element in `rows` is a row whose format is defined by
7010        # metadata.row_type. The ith element
7011        # in each row matches the ith field in
7012        # metadata.row_type. Elements are
7013        # encoded based on type as described
7014        # here.
7015      [
7016        "",
7017      ],
7018    ],
7019    "stats": { # Additional statistics about a ResultSet or PartialResultSet. # Query plan and execution statistics for the SQL statement that
7020        # produced this result set. These can be requested by setting
7021        # ExecuteSqlRequest.query_mode.
7022        # DML statements always produce stats containing the number of rows
7023        # modified, unless executed with ExecuteSqlRequest.query_mode set to
7024        # ExecuteSqlRequest.QueryMode.PLAN.
7025        # Other fields may or may not be populated, based on the
7026        # ExecuteSqlRequest.query_mode.
7027      "rowCountLowerBound": "A String", # Partitioned DML does not offer exactly-once semantics, so it
7028          # returns a lower bound of the rows modified.
7029      "rowCountExact": "A String", # Standard DML returns an exact count of rows that were modified.
7030      "queryPlan": { # Contains an ordered list of nodes appearing in the query plan. # QueryPlan for the query associated with this result.
7031        "planNodes": [ # The nodes in the query plan. Plan nodes are returned in pre-order starting
7032            # with the plan root. Each PlanNode's `id` corresponds to its index in
7033            # `plan_nodes`.
7034          { # Node information for nodes appearing in a QueryPlan.plan_nodes.
7035            "index": 42, # The `PlanNode`'s index in node list.
7036            "kind": "A String", # Used to determine the type of node. May be needed for visualizing
7037                # different kinds of nodes differently. For example, if the node is a
7038                # SCALAR node, it will have a condensed representation
7039                # which can be used to directly embed a description of the node in its
7040                # parent.
7041            "displayName": "A String", # The display name for the node.
7042            "executionStats": { # The execution statistics associated with the node, contained in a group of
7043                # key-value pairs. Only present if the plan was returned as a result of a
7044                # profile query. For example, number of executions, number of rows/time per
7045                # execution etc.
7046              "a_key": "", # Properties of the object.
7047            },
7048            "childLinks": [ # List of child node `index`es and their relationship to this parent.
7049              { # Metadata associated with a parent-child relationship appearing in a
7050                  # PlanNode.
7051                "variable": "A String", # Only present if the child node is SCALAR and corresponds
7052                    # to an output variable of the parent node. The field carries the name of
7053                    # the output variable.
7054                    # For example, a `TableScan` operator that reads rows from a table will
7055                    # have child links to the `SCALAR` nodes representing the output variables
7056                    # created for each column that is read by the operator. The corresponding
7057                    # `variable` fields will be set to the variable names assigned to the
7058                    # columns.
7059                "childIndex": 42, # The node to which the link points.
7060                "type": "A String", # The type of the link. For example, in Hash Joins this could be used to
7061                    # distinguish between the build child and the probe child, or in the case
7062                    # of the child being an output variable, to represent the tag associated
7063                    # with the output variable.
7064              },
7065            ],
7066            "shortRepresentation": { # Condensed representation of a node and its subtree. Only present for # Condensed representation for SCALAR nodes.
7067                # `SCALAR` PlanNode(s).
7068              "subqueries": { # A mapping of (subquery variable name) -> (subquery node id) for cases
7069                  # where the `description` string of this node references a `SCALAR`
7070                  # subquery contained in the expression subtree rooted at this node. The
7071                  # referenced `SCALAR` subquery may not necessarily be a direct child of
7072                  # this node.
7073                "a_key": 42,
7074              },
7075              "description": "A String", # A string representation of the expression subtree rooted at this node.
7076            },
7077            "metadata": { # Attributes relevant to the node contained in a group of key-value pairs.
7078                # For example, a Parameter Reference node could have the following
7079                # information in its metadata:
7080                #
7081                #     {
7082                #       "parameter_reference": "param1",
7083                #       "parameter_type": "array"
7084                #     }
7085              "a_key": "", # Properties of the object.
7086            },
7087          },
7088        ],
7089      },
7090      "queryStats": { # Aggregated statistics from the execution of the query. Only present when
7091          # the query is profiled. For example, a query could return the statistics as
7092          # follows:
7093          #
7094          #     {
7095          #       "rows_returned": "3",
7096          #       "elapsed_time": "1.22 secs",
7097          #       "cpu_time": "1.19 secs"
7098          #     }
7099        "a_key": "", # Properties of the object.
7100      },
7101    },
7102    "metadata": { # Metadata about a ResultSet or PartialResultSet. # Metadata about the result set, such as row type information.
7103      "rowType": { # `StructType` defines the fields of a STRUCT type. # Indicates the field names and types for the rows in the result
7104          # set.  For example, a SQL query like `"SELECT UserId, UserName FROM
7105          # Users"` could return a `row_type` value like:
7106          #
7107          #     "fields": [
7108          #       { "name": "UserId", "type": { "code": "INT64" } },
7109          #       { "name": "UserName", "type": { "code": "STRING" } }
7110          #     ]
7111        "fields": [ # The list of fields that make up this struct. Order is
7112            # significant, because values of this struct type are represented as
7113            # lists, where the order of field values matches the order of
7114            # fields in the StructType. In turn, the order of fields
7115            # matches the order of columns in a read request, or the order of
7116            # fields in the `SELECT` clause of a query.
7117          { # Message representing a single field of a struct.
7118            "type": { # `Type` indicates the type of a Cloud Spanner value, as might be stored in a # The type of the field.
7119                # table cell or returned from an SQL query.
7120              "structType": # Object with schema name: StructType # If code == STRUCT, then `struct_type`
7121                  # provides type information for the struct's fields.
7122              "code": "A String", # Required. The TypeCode for this type.
7123              "arrayElementType": # Object with schema name: Type # If code == ARRAY, then `array_element_type`
7124                  # is the type of the array elements.
7125            },
7126            "name": "A String", # The name of the field. For reads, this is the column name. For
7127                # SQL queries, it is the column alias (e.g., `"Word"` in the
7128                # query `"SELECT 'hello' AS Word"`), or the column name (e.g.,
7129                # `"ColName"` in the query `"SELECT ColName FROM Table"`). Some
7130                # columns might have an empty name (e.g., `"SELECT
7131                # UPPER(ColName)"`). Note that a query result can contain
7132                # multiple fields with the same name.
7133          },
7134        ],
7135      },
7136      "transaction": { # A transaction. # If the read or SQL query began a transaction as a side-effect, the
7137          # information about the new transaction is yielded here.
7138        "readTimestamp": "A String", # For snapshot read-only transactions, the read timestamp chosen
7139            # for the transaction. Not returned by default: see
7140            # TransactionOptions.ReadOnly.return_read_timestamp.
7141            #
7142            # A timestamp in RFC3339 UTC "Zulu" format, accurate to nanoseconds.
7143            # Example: `"2014-10-02T15:01:23.045123456Z"`.
7144        "id": "A String", # `id` may be used to identify the transaction in subsequent
7145            # Read,
7146            # ExecuteSql,
7147            # Commit, or
7148            # Rollback calls.
7149            #
7150            # Single-use read-only transactions do not have IDs, because
7151            # single-use transactions do not support multiple requests.
7152      },
7153    },
7154  }</pre>
7155</div>
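<p>The <code>rowType</code> metadata in the response above can be unpacked positionally, since field order matches the column order of the read or <code>SELECT</code> clause. A minimal sketch in Python (the <code>metadata</code> dict and the sample row are hypothetical fragments mirroring the schema's own <code>UserId</code>/<code>UserName</code> example, not values returned by a real call):</p>

```python
# Hypothetical ResultSetMetadata fragment, matching the rowType example
# in the schema above ("SELECT UserId, UserName FROM Users").
metadata = {
    "rowType": {
        "fields": [
            {"name": "UserId", "type": {"code": "INT64"}},
            {"name": "UserName", "type": {"code": "STRING"}},
        ]
    }
}

# Field order is significant: it matches the order of values in each row,
# so names can be zipped with row values directly.
columns = [(f["name"], f["type"]["code"]) for f in metadata["rowType"]["fields"]]

row = ["42", "alice"]  # hypothetical row; INT64 values arrive as strings
record = dict(zip((name for name, _ in columns), row))
print(columns)  # [('UserId', 'INT64'), ('UserName', 'STRING')]
print(record)   # {'UserId': '42', 'UserName': 'alice'}
```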
7156
7157<div class="method">
7158    <code class="details" id="rollback">rollback(session, body, x__xgafv=None)</code>
7159  <pre>Rolls back a transaction, releasing any locks it holds. It is a good
7160idea to call this for any transaction that includes one or more
7161Read or ExecuteSql requests and
7162ultimately decides not to commit.
7163
7164`Rollback` returns `OK` if it successfully aborts the transaction, the
7165transaction was already aborted, or the transaction is not
7166found. `Rollback` never returns `ABORTED`.
7167
7168Args:
7169  session: string, Required. The session in which the transaction to roll back is running. (required)
7170  body: object, The request body. (required)
7171    The object takes the form of:
7172
7173{ # The request for Rollback.
7174    "transactionId": "A String", # Required. The transaction to roll back.
7175  }
7176
7177  x__xgafv: string, V1 error format.
7178    Allowed values
7179      1 - v1 error format
7180      2 - v2 error format
7181
7182Returns:
7183  An object of the form:
7184
7185    { # A generic empty message that you can re-use to avoid defining duplicated
7186      # empty messages in your APIs. A typical example is to use it as the request
7187      # or the response type of an API method. For instance:
7188      #
7189      #     service Foo {
7190      #       rpc Bar(google.protobuf.Empty) returns (google.protobuf.Empty);
7191      #     }
7192      #
7193      # The JSON representation for `Empty` is an empty JSON object `{}`.
7194  }</pre>
7195</div>
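<p>As a sketch of how <code>rollback</code> is typically invoked through the generated Python client (the service object, session path, and transaction ID below are placeholder assumptions, not values from this document):</p>

```python
# Build the Rollback request body described above; per the schema, it takes
# exactly one required field.
def make_rollback_body(transaction_id: str) -> dict:
    return {"transactionId": transaction_id}

body = make_rollback_body("example-txn-id")  # placeholder transaction ID
print(body)  # {'transactionId': 'example-txn-id'}

# With an authorized client built via googleapiclient.discovery.build(
# 'spanner', 'v1'), the call would look like:
#
#   service.projects().instances().databases().sessions().rollback(
#       session="projects/my-proj/instances/my-inst/databases/my-db/sessions/my-sess",
#       body=body,
#   ).execute()
#
# On success this returns the Empty message, i.e. `{}`; Rollback never
# returns `ABORTED`.
```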
7196
7197<div class="method">
7198    <code class="details" id="streamingRead">streamingRead(session, body, x__xgafv=None)</code>
7199  <pre>Like Read, except returns the result set as a
7200stream. Unlike Read, there is no limit on the
7201size of the returned result set. However, no individual row in
7202the result set can exceed 100 MiB, and no column value can exceed
720310 MiB.
7204
7205Args:
7206  session: string, Required. The session in which the read should be performed. (required)
7207  body: object, The request body. (required)
7208    The object takes the form of:
7209
7210{ # The request for Read and
7211      # StreamingRead.
7212    "index": "A String", # If non-empty, the name of an index on table. This index is
7213        # used instead of the table primary key when interpreting key_set
7214        # and sorting result rows. See key_set for further information.
7215    "transaction": { # This message is used to select the transaction in which a # The transaction to use. If none is provided, the default is a
7216        # temporary read-only transaction with strong concurrency.
7217        # Read or
7218        # ExecuteSql call runs.
7219        #
7220        # See TransactionOptions for more information about transactions.
7221      "begin": { # # Transactions # Begin a new transaction and execute this read or SQL query in
7222          # it. The transaction ID of the new transaction is returned in
7223          # ResultSetMetadata.transaction, which is a Transaction.
7224          #
7225          #
7226          # Each session can have at most one active transaction at a time. After the
7227          # active transaction is completed, the session can immediately be
7228          # re-used for the next transaction. It is not necessary to create a
7229          # new session for each transaction.
7230          #
7231          # # Transaction Modes
7232          #
7233          # Cloud Spanner supports three transaction modes:
7234          #
7235          #   1. Locking read-write. This type of transaction is the only way
7236          #      to write data into Cloud Spanner. These transactions rely on
7237          #      pessimistic locking and, if necessary, two-phase commit.
7238          #      Locking read-write transactions may abort, requiring the
7239          #      application to retry.
7240          #
7241          #   2. Snapshot read-only. This transaction type provides guaranteed
7242          #      consistency across several reads, but does not allow
7243          #      writes. Snapshot read-only transactions can be configured to
7244          #      read at timestamps in the past. Snapshot read-only
7245          #      transactions do not need to be committed.
7246          #
7247          #   3. Partitioned DML. This type of transaction is used to execute
7248          #      a single Partitioned DML statement. Partitioned DML partitions
7249          #      the key space and runs the DML statement over each partition
7250          #      in parallel using separate, internal transactions that commit
7251          #      independently. Partitioned DML transactions do not need to be
7252          #      committed.
7253          #
7254          # For transactions that only read, snapshot read-only transactions
7255          # provide simpler semantics and are almost always faster. In
7256          # particular, read-only transactions do not take locks, so they do
7257          # not conflict with read-write transactions. As a consequence of not
7258          # taking locks, they also do not abort, so retry loops are not needed.
7259          #
7260          # Transactions may only read/write data in a single database. They
7261          # may, however, read/write data in different tables within that
7262          # database.
7263          #
7264          # ## Locking Read-Write Transactions
7265          #
7266          # Locking transactions may be used to atomically read-modify-write
7267          # data anywhere in a database. This type of transaction is externally
7268          # consistent.
7269          #
7270          # Clients should attempt to minimize the amount of time a transaction
7271          # is active. Faster transactions commit with higher probability
7272          # and cause less contention. Cloud Spanner attempts to keep read locks
7273          # active as long as the transaction continues to do reads, and the
7274          # transaction has not been terminated by
7275          # Commit or
7276          # Rollback.  Long periods of
7277          # inactivity at the client may cause Cloud Spanner to release a
7278          # transaction's locks and abort it.
7279          #
7280          # Conceptually, a read-write transaction consists of zero or more
7281          # reads or SQL statements followed by
7282          # Commit. At any time before
7283          # Commit, the client can send a
7284          # Rollback request to abort the
7285          # transaction.
7286          #
7287          # ### Semantics
7288          #
7289          # Cloud Spanner can commit the transaction if all read locks it acquired
7290          # are still valid at commit time, and it is able to acquire write
7291          # locks for all writes. Cloud Spanner can abort the transaction for any
7292          # reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees
7293          # that the transaction has not modified any user data in Cloud Spanner.
7294          #
7295          # Unless the transaction commits, Cloud Spanner makes no guarantees about
7296          # how long the transaction's locks were held for. It is an error to
7297          # use Cloud Spanner locks for any sort of mutual exclusion other than
7298          # between Cloud Spanner transactions themselves.
7299          #
7300          # ### Retrying Aborted Transactions
7301          #
7302          # When a transaction aborts, the application can choose to retry the
7303          # whole transaction again. To maximize the chances of successfully
7304          # committing the retry, the client should execute the retry in the
7305          # same session as the original attempt. The original session's lock
7306          # priority increases with each consecutive abort, meaning that each
7307          # attempt has a slightly better chance of success than the previous.
7308          #
7309          # Under some circumstances (e.g., many transactions attempting to
7310          # modify the same row(s)), a transaction can abort many times in a
7311          # short period before successfully committing. Thus, it is not a good
7312          # idea to cap the number of retries a transaction can attempt;
7313          # instead, it is better to limit the total amount of wall time spent
7314          # retrying.
7315          #
7316          # ### Idle Transactions
7317          #
7318          # A transaction is considered idle if it has no outstanding reads or
7319          # SQL queries and has not started a read or SQL query within the last 10
7320          # seconds. Idle transactions can be aborted by Cloud Spanner so that they
7321          # don't hold on to locks indefinitely. In that case, the commit will
7322          # fail with error `ABORTED`.
7323          #
7324          # If this behavior is undesirable, periodically executing a simple
7325          # SQL query in the transaction (e.g., `SELECT 1`) prevents the
7326          # transaction from becoming idle.
7327          #
7328          # ## Snapshot Read-Only Transactions
7329          #
7330          # Snapshot read-only transactions provide a simpler method than
7331          # locking read-write transactions for doing several consistent
7332          # reads. However, this type of transaction does not support writes.
7333          #
7334          # Snapshot transactions do not take locks. Instead, they work by
7335          # choosing a Cloud Spanner timestamp, then executing all reads at that
7336          # timestamp. Since they do not acquire locks, they do not block
7337          # concurrent read-write transactions.
7338          #
7339          # Unlike locking read-write transactions, snapshot read-only
7340          # transactions never abort. They can fail if the chosen read
7341          # timestamp is garbage collected; however, the default garbage
7342          # collection policy is generous enough that most applications do not
7343          # need to worry about this in practice.
7344          #
7345          # Snapshot read-only transactions do not need to call
7346          # Commit or
7347          # Rollback (and in fact are not
7348          # permitted to do so).
7349          #
7350          # To execute a snapshot transaction, the client specifies a timestamp
7351          # bound, which tells Cloud Spanner how to choose a read timestamp.
7352          #
7353          # The types of timestamp bound are:
7354          #
7355          #   - Strong (the default).
7356          #   - Bounded staleness.
7357          #   - Exact staleness.
7358          #
7359          # If the Cloud Spanner database to be read is geographically distributed,
7360          # stale read-only transactions can execute more quickly than strong
7361          # or read-write transactions, because they are able to execute far
7362          # from the leader replica.
7363          #
7364          # Each type of timestamp bound is discussed in detail below.
7365          #
7366          # ### Strong
7367          #
7368          # Strong reads are guaranteed to see the effects of all transactions
7369          # that have committed before the start of the read. Furthermore, all
7370          # rows yielded by a single read are consistent with each other -- if
7371          # any part of the read observes a transaction, all parts of the read
7372          # see the transaction.
7373          #
7374          # Strong reads are not repeatable: two consecutive strong read-only
7375          # transactions might return inconsistent results if there are
7376          # concurrent writes. If consistency across reads is required, the
7377          # reads should be executed within a transaction or at an exact read
7378          # timestamp.
7379          #
7380          # See TransactionOptions.ReadOnly.strong.
7381          #
7382          # ### Exact Staleness
7383          #
7384          # These timestamp bounds execute reads at a user-specified
7385          # timestamp. Reads at a timestamp are guaranteed to see a consistent
7386          # prefix of the global transaction history: they observe
7387          # modifications done by all transactions with a commit timestamp <=
7388          # the read timestamp, and observe none of the modifications done by
7389          # transactions with a larger commit timestamp. They will block until
7390          # all conflicting transactions that may be assigned commit timestamps
7391          # <= the read timestamp have finished.
7392          #
7393          # The timestamp can either be expressed as an absolute Cloud Spanner commit
7394          # timestamp or a staleness relative to the current time.
7395          #
7396          # These modes do not require a "negotiation phase" to pick a
7397          # timestamp. As a result, they execute slightly faster than the
7398          # equivalent boundedly stale concurrency modes. On the other hand,
7399          # boundedly stale reads usually return fresher results.
7400          #
7401          # See TransactionOptions.ReadOnly.read_timestamp and
7402          # TransactionOptions.ReadOnly.exact_staleness.
7403          #
7404          # ### Bounded Staleness
7405          #
7406          # Bounded staleness modes allow Cloud Spanner to pick the read timestamp,
7407          # subject to a user-provided staleness bound. Cloud Spanner chooses the
7408          # newest timestamp within the staleness bound that allows execution
7409          # of the reads at the closest available replica without blocking.
7410          #
7411          # All rows yielded are consistent with each other -- if any part of
7412          # the read observes a transaction, all parts of the read see the
7413          # transaction. Boundedly stale reads are not repeatable: two stale
7414          # reads, even if they use the same staleness bound, can execute at
7415          # different timestamps and thus return inconsistent results.
7416          #
7417          # Boundedly stale reads execute in two phases: the first phase
7418          # negotiates a timestamp among all replicas needed to serve the
7419          # read. In the second phase, reads are executed at the negotiated
7420          # timestamp.
7421          #
7422          # As a result of the two phase execution, bounded staleness reads are
7423          # usually a little slower than comparable exact staleness
7424          # reads. However, they are typically able to return fresher
7425          # results, and are more likely to execute at the closest replica.
7426          #
7427          # Because the timestamp negotiation requires up-front knowledge of
7428          # which rows will be read, it can only be used with single-use
7429          # read-only transactions.
7430          #
7431          # See TransactionOptions.ReadOnly.max_staleness and
7432          # TransactionOptions.ReadOnly.min_read_timestamp.
7433          #
7434          # ### Old Read Timestamps and Garbage Collection
7435          #
7436          # Cloud Spanner continuously garbage collects deleted and overwritten data
7437          # in the background to reclaim storage space. This process is known
7438          # as "version GC". By default, version GC reclaims versions after they
7439          # are one hour old. Because of this, Cloud Spanner cannot perform reads
7440          # at read timestamps more than one hour in the past. This
7441          # restriction also applies to in-progress reads and/or SQL queries whose
7442          # timestamps become too old while executing. Reads and SQL queries with
7443          # too-old read timestamps fail with the error `FAILED_PRECONDITION`.
7444          #
7445          # ## Partitioned DML Transactions
7446          #
7447          # Partitioned DML transactions are used to execute DML statements with a
7448          # different execution strategy that provides different, and often better,
7449          # scalability properties for large, table-wide operations than DML in a
7450          # ReadWrite transaction. Smaller-scoped statements, such as those in an
7451          # OLTP workload, should use ReadWrite transactions.
7452          #
7453          # Partitioned DML partitions the keyspace and runs the DML statement on each
7454          # partition in separate, internal transactions. These transactions commit
7455          # automatically when complete, and run independently from one another.
7456          #
7457          # To reduce lock contention, this execution strategy only acquires read locks
7458          # on rows that match the WHERE clause of the statement. Additionally, the
7459          # smaller per-partition transactions hold locks for less time.
7460          #
7461          # That said, Partitioned DML is not a drop-in replacement for standard DML used
7462          # in ReadWrite transactions.
7463          #
7464          #  - The DML statement must be fully-partitionable. Specifically, the statement
7465          #    must be expressible as the union of many statements which each access only
7466          #    a single row of the table.
7467          #
7468          #  - The statement is not applied atomically to all rows of the table. Rather,
7469          #    the statement is applied atomically to partitions of the table, in
7470          #    independent transactions. Secondary index rows are updated atomically
7471          #    with the base table rows.
7472          #
7473          #  - Partitioned DML does not guarantee exactly-once execution semantics
7474          #    against a partition. The statement will be applied at least once to each
7475          #    partition. It is strongly recommended that the DML statement should be
7476          #    idempotent to avoid unexpected results. For instance, it is potentially
7477          #    dangerous to run a statement such as
7478          #    `UPDATE table SET column = column + 1` as it could be run multiple times
7479          #    against some rows.
7480          #
7481          #  - The partitions are committed automatically - there is no support for
7482          #    Commit or Rollback. If the call returns an error, or if the client issuing
7483          #    the ExecuteSql call dies, it is possible that some rows had the statement
7484          #    executed on them successfully. It is also possible that statement was
7485          #    never executed against other rows.
7486          #
7487          #  - Partitioned DML transactions may only contain the execution of a single
7488          #    DML statement via ExecuteSql or ExecuteStreamingSql.
7489          #
7490          #  - If any error is encountered during the execution of the partitioned DML
7491          #    operation (for instance, a UNIQUE INDEX violation, division by zero, or a
7492          #    value that cannot be stored due to schema constraints), then the
7493          #    operation is stopped at that point and an error is returned. It is
7494          #    possible that at this point, some partitions have been committed (or even
7495          #    committed multiple times), and other partitions have not been run at all.
7496          #
7497          # Given the above, Partitioned DML is a good fit for large, database-wide
7498          # operations that are idempotent, such as deleting old rows from a very large
7499          # table.
7500        "readWrite": { # Message type to initiate a read-write transaction. Currently this # Transaction may write.
7501            # transaction type has no options.
7502            #
7503            # Authorization to begin a read-write transaction requires
7504            # `spanner.databases.beginOrRollbackReadWriteTransaction` permission
7505            # on the `session` resource.
7506        },
7507        "readOnly": { # Message type to initiate a read-only transaction. # Transaction will not write.
7508            #
7509            # Authorization to begin a read-only transaction requires
7510            # `spanner.databases.beginReadOnlyTransaction` permission
7511            # on the `session` resource.
7512          "minReadTimestamp": "A String", # Executes all reads at a timestamp >= `min_read_timestamp`.
7513              #
7514              # This is useful for requesting fresher data than some previous
7515              # read, or data that is fresh enough to observe the effects of some
7516              # previously committed transaction whose timestamp is known.
7517              #
7518              # Note that this option can only be used in single-use transactions.
7519              #
7520              # A timestamp in RFC3339 UTC "Zulu" format, accurate to nanoseconds.
7521              # Example: `"2014-10-02T15:01:23.045123456Z"`.
7522          "returnReadTimestamp": True or False, # If true, the Cloud Spanner-selected read timestamp is included in
7523              # the Transaction message that describes the transaction.
7524          "maxStaleness": "A String", # Read data at a timestamp >= `NOW - max_staleness`
7525              # seconds. Guarantees that all writes that have committed more
7526              # than the specified number of seconds ago are visible. Because
7527              # Cloud Spanner chooses the exact timestamp, this mode works even if
7528              # the client's local clock is substantially skewed from Cloud Spanner
7529              # commit timestamps.
7530              #
7531              # Useful for reading the freshest data available at a nearby
7532              # replica, while bounding the possible staleness if the local
7533              # replica has fallen behind.
7534              #
7535              # Note that this option can only be used in single-use
7536              # transactions.
7537          "exactStaleness": "A String", # Executes all reads at a timestamp that is `exact_staleness`
7538              # old. The timestamp is chosen soon after the read is started.
7539              #
7540              # Guarantees that all writes that have committed more than the
7541              # specified number of seconds ago are visible. Because Cloud Spanner
7542              # chooses the exact timestamp, this mode works even if the client's
7543              # local clock is substantially skewed from Cloud Spanner commit
7544              # timestamps.
7545              #
7546              # Useful for reading at nearby replicas without the distributed
7547              # timestamp negotiation overhead of `max_staleness`.
7548          "readTimestamp": "A String", # Executes all reads at the given timestamp. Unlike other modes,
7549              # reads at a specific timestamp are repeatable; the same read at
7550              # the same timestamp always returns the same data. If the
7551              # timestamp is in the future, the read will block until the
7552              # specified timestamp, modulo the read's deadline.
7553              #
7554              # Useful for large scale consistent reads such as mapreduces, or
7555              # for coordinating many reads against a consistent snapshot of the
7556              # data.
7557              #
7558              # A timestamp in RFC3339 UTC "Zulu" format, accurate to nanoseconds.
7559              # Example: `"2014-10-02T15:01:23.045123456Z"`.
7560          "strong": True or False, # Read at a timestamp where all previously committed transactions
7561              # are visible.
7562        },
7563        "partitionedDml": { # Message type to initiate a Partitioned DML transaction. # Partitioned DML transaction.
7564            #
7565            # Authorization to begin a Partitioned DML transaction requires
7566            # `spanner.databases.beginPartitionedDmlTransaction` permission
7567            # on the `session` resource.
7568        },
7569      },
7570      "singleUse": { # # Transactions # Execute the read or SQL query in a temporary transaction.
7571          # This is the most efficient way to execute a transaction that
7572          # consists of a single SQL query.
7573          #
7574          #
7575          # Each session can have at most one active transaction at a time. After the
7576          # active transaction is completed, the session can immediately be
7577          # re-used for the next transaction. It is not necessary to create a
7578          # new session for each transaction.
7579          #
7580          # # Transaction Modes
7581          #
7582          # Cloud Spanner supports three transaction modes:
7583          #
7584          #   1. Locking read-write. This type of transaction is the only way
7585          #      to write data into Cloud Spanner. These transactions rely on
7586          #      pessimistic locking and, if necessary, two-phase commit.
7587          #      Locking read-write transactions may abort, requiring the
7588          #      application to retry.
7589          #
7590          #   2. Snapshot read-only. This transaction type provides guaranteed
7591          #      consistency across several reads, but does not allow
7592          #      writes. Snapshot read-only transactions can be configured to
7593          #      read at timestamps in the past. Snapshot read-only
7594          #      transactions do not need to be committed.
7595          #
7596          #   3. Partitioned DML. This type of transaction is used to execute
7597          #      a single Partitioned DML statement. Partitioned DML partitions
7598          #      the key space and runs the DML statement over each partition
7599          #      in parallel using separate, internal transactions that commit
7600          #      independently. Partitioned DML transactions do not need to be
7601          #      committed.
7602          #
7603          # For transactions that only read, snapshot read-only transactions
7604          # provide simpler semantics and are almost always faster. In
7605          # particular, read-only transactions do not take locks, so they do
7606          # not conflict with read-write transactions. As a consequence of not
7607          # taking locks, they also do not abort, so retry loops are not needed.
7608          #
7609          # Transactions may only read/write data in a single database. They
7610          # may, however, read/write data in different tables within that
7611          # database.
7612          #
7613          # ## Locking Read-Write Transactions
7614          #
7615          # Locking transactions may be used to atomically read-modify-write
7616          # data anywhere in a database. This type of transaction is externally
7617          # consistent.
7618          #
7619          # Clients should attempt to minimize the amount of time a transaction
7620          # is active. Faster transactions commit with higher probability
7621          # and cause less contention. Cloud Spanner attempts to keep read locks
7622          # active as long as the transaction continues to do reads, and the
7623          # transaction has not been terminated by
7624          # Commit or
7625          # Rollback.  Long periods of
7626          # inactivity at the client may cause Cloud Spanner to release a
7627          # transaction's locks and abort it.
7628          #
7629          # Conceptually, a read-write transaction consists of zero or more
7630          # reads or SQL statements followed by
7631          # Commit. At any time before
7632          # Commit, the client can send a
7633          # Rollback request to abort the
7634          # transaction.
7635          #
7636          # ### Semantics
7637          #
7638          # Cloud Spanner can commit the transaction if all read locks it acquired
7639          # are still valid at commit time, and it is able to acquire write
7640          # locks for all writes. Cloud Spanner can abort the transaction for any
7641          # reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees
7642          # that the transaction has not modified any user data in Cloud Spanner.
7643          #
7644          # Unless the transaction commits, Cloud Spanner makes no guarantees about
7645          # how long the transaction's locks were held for. It is an error to
7646          # use Cloud Spanner locks for any sort of mutual exclusion other than
7647          # between Cloud Spanner transactions themselves.
7648          #
7649          # ### Retrying Aborted Transactions
7650          #
7651          # When a transaction aborts, the application can choose to retry the
7652          # whole transaction again. To maximize the chances of successfully
7653          # committing the retry, the client should execute the retry in the
7654          # same session as the original attempt. The original session's lock
7655          # priority increases with each consecutive abort, meaning that each
7656          # attempt has a slightly better chance of success than the previous.
7657          #
7658          # Under some circumstances (e.g., many transactions attempting to
7659          # modify the same row(s)), a transaction can abort many times in a
7660          # short period before successfully committing. Thus, it is not a good
7661          # idea to cap the number of retries a transaction can attempt;
7662          # instead, it is better to limit the total amount of wall time spent
7663          # retrying.
7664          #
7665          # ### Idle Transactions
7666          #
7667          # A transaction is considered idle if it has no outstanding reads or
7668          # SQL queries and has not started a read or SQL query within the last 10
7669          # seconds. Idle transactions can be aborted by Cloud Spanner so that they
7670          # don't hold on to locks indefinitely. In that case, the commit will
7671          # fail with error `ABORTED`.
7672          #
7673          # If this behavior is undesirable, periodically executing a simple
7674          # SQL query in the transaction (e.g., `SELECT 1`) prevents the
7675          # transaction from becoming idle.
7676          #
7677          # ## Snapshot Read-Only Transactions
7678          #
7679          # Snapshot read-only transactions provide a simpler method than
7680          # locking read-write transactions for doing several consistent
7681          # reads. However, this type of transaction does not support writes.
7682          #
7683          # Snapshot transactions do not take locks. Instead, they work by
7684          # choosing a Cloud Spanner timestamp, then executing all reads at that
7685          # timestamp. Since they do not acquire locks, they do not block
7686          # concurrent read-write transactions.
7687          #
7688          # Unlike locking read-write transactions, snapshot read-only
7689          # transactions never abort. They can fail if the chosen read
7690          # timestamp is garbage collected; however, the default garbage
7691          # collection policy is generous enough that most applications do not
7692          # need to worry about this in practice.
7693          #
7694          # Snapshot read-only transactions do not need to call
7695          # Commit or
7696          # Rollback (and in fact are not
7697          # permitted to do so).
7698          #
7699          # To execute a snapshot transaction, the client specifies a timestamp
7700          # bound, which tells Cloud Spanner how to choose a read timestamp.
7701          #
7702          # The types of timestamp bound are:
7703          #
7704          #   - Strong (the default).
7705          #   - Bounded staleness.
7706          #   - Exact staleness.
7707          #
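The three bounds map onto the `readOnly` fields described later in this message; a small helper (a sketch, with field names as they appear in this document and the mode strings invented for illustration) makes the mapping explicit:

```python
def read_only_options(mode, value=None):
    """Build a TransactionOptions dict for each kind of timestamp bound."""
    if mode == "strong":     # default: see all committed transactions
        return {"readOnly": {"strong": True}}
    if mode == "bounded":    # value is a max staleness, e.g. "10s"
        return {"readOnly": {"maxStaleness": value}}
    if mode == "exact":      # value is an exact staleness, e.g. "60s"
        return {"readOnly": {"exactStaleness": value}}
    raise ValueError("unknown timestamp bound: %s" % mode)
```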
7708          # If the Cloud Spanner database to be read is geographically distributed,
7709          # stale read-only transactions can execute more quickly than strong
7710          # or read-write transactions, because they are able to execute far
7711          # from the leader replica.
7712          #
7713          # Each type of timestamp bound is discussed in detail below.
7714          #
7715          # ### Strong
7716          #
7717          # Strong reads are guaranteed to see the effects of all transactions
7718          # that have committed before the start of the read. Furthermore, all
7719          # rows yielded by a single read are consistent with each other -- if
7720          # any part of the read observes a transaction, all parts of the read
7721          # see the transaction.
7722          #
7723          # Strong reads are not repeatable: two consecutive strong read-only
7724          # transactions might return inconsistent results if there are
7725          # concurrent writes. If consistency across reads is required, the
7726          # reads should be executed within a transaction or at an exact read
7727          # timestamp.
7728          #
7729          # See TransactionOptions.ReadOnly.strong.
7730          #
7731          # ### Exact Staleness
7732          #
7733          # These timestamp bounds execute reads at a user-specified
7734          # timestamp. Reads at a timestamp are guaranteed to see a consistent
7735          # prefix of the global transaction history: they observe
7736          # modifications done by all transactions with a commit timestamp <=
7737          # the read timestamp, and observe none of the modifications done by
7738          # transactions with a larger commit timestamp. They will block until
7739          # all conflicting transactions that may be assigned commit timestamps
7740          # <= the read timestamp have finished.
7741          #
7742          # The timestamp can either be expressed as an absolute Cloud Spanner commit
7743          # timestamp or a staleness relative to the current time.
7744          #
7745          # These modes do not require a "negotiation phase" to pick a
7746          # timestamp. As a result, they execute slightly faster than the
7747          # equivalent boundedly stale concurrency modes. On the other hand,
7748          # boundedly stale reads usually return fresher results.
7749          #
7750          # See TransactionOptions.ReadOnly.read_timestamp and
7751          # TransactionOptions.ReadOnly.exact_staleness.
7752          #
7753          # ### Bounded Staleness
7754          #
7755          # Bounded staleness modes allow Cloud Spanner to pick the read timestamp,
7756          # subject to a user-provided staleness bound. Cloud Spanner chooses the
7757          # newest timestamp within the staleness bound that allows execution
7758          # of the reads at the closest available replica without blocking.
7759          #
7760          # All rows yielded are consistent with each other -- if any part of
7761          # the read observes a transaction, all parts of the read see the
7762          # transaction. Boundedly stale reads are not repeatable: two stale
7763          # reads, even if they use the same staleness bound, can execute at
7764          # different timestamps and thus return inconsistent results.
7765          #
7766          # Boundedly stale reads execute in two phases: the first phase
7767          # negotiates a timestamp among all replicas needed to serve the
7768          # read. In the second phase, reads are executed at the negotiated
7769          # timestamp.
7770          #
7771          # As a result of the two-phase execution, bounded staleness reads are
7772          # usually a little slower than comparable exact staleness
7773          # reads. However, they are typically able to return fresher
7774          # results, and are more likely to execute at the closest replica.
7775          #
7776          # Because the timestamp negotiation requires up-front knowledge of
7777          # which rows will be read, it can only be used with single-use
7778          # read-only transactions.
7779          #
7780          # See TransactionOptions.ReadOnly.max_staleness and
7781          # TransactionOptions.ReadOnly.min_read_timestamp.
7782          #
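Because bounded staleness is restricted to single-use transactions, the bound is typically supplied inline on the read itself. A sketched ReadRequest body (the helper name and default staleness are assumptions for illustration):

```python
def bounded_staleness_read_body(table, columns, staleness="15s"):
    """ReadRequest body using a single-use read-only transaction with a
    staleness bound, so Cloud Spanner may serve the read from a nearby
    replica without blocking."""
    return {
        "table": table,
        "columns": list(columns),
        "keySet": {"all": True},
        "transaction": {
            "singleUse": {"readOnly": {"maxStaleness": staleness}}
        },
    }
```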
7783          # ### Old Read Timestamps and Garbage Collection
7784          #
7785          # Cloud Spanner continuously garbage collects deleted and overwritten data
7786          # in the background to reclaim storage space. This process is known
7787          # as "version GC". By default, version GC reclaims versions after they
7788          # are one hour old. Because of this, Cloud Spanner cannot perform reads
7789          # at read timestamps more than one hour in the past. This
7790          # restriction also applies to in-progress reads and/or SQL queries whose
7791          # timestamps become too old while executing. Reads and SQL queries with
7792          # too-old read timestamps fail with the error `FAILED_PRECONDITION`.
7793          #
7794          # ## Partitioned DML Transactions
7795          #
7796          # Partitioned DML transactions are used to execute DML statements with a
7797          # different execution strategy that provides different, and often better,
7798          # scalability properties for large, table-wide operations than DML in a
7799          # ReadWrite transaction. Smaller-scoped statements, such as those in an
7800          # OLTP workload, should use ReadWrite transactions.
7801          #
7802          # Partitioned DML partitions the keyspace and runs the DML statement on each
7803          # partition in separate, internal transactions. These transactions commit
7804          # automatically when complete, and run independently from one another.
7805          #
7806          # To reduce lock contention, this execution strategy only acquires read locks
7807          # on rows that match the WHERE clause of the statement. Additionally, the
7808          # smaller per-partition transactions hold locks for less time.
7809          #
7810          # That said, Partitioned DML is not a drop-in replacement for standard DML used
7811          # in ReadWrite transactions.
7812          #
7813          #  - The DML statement must be fully-partitionable. Specifically, the statement
7814          #    must be expressible as the union of many statements which each access only
7815          #    a single row of the table.
7816          #
7817          #  - The statement is not applied atomically to all rows of the table. Rather,
7818          #    the statement is applied atomically to partitions of the table, in
7819          #    independent transactions. Secondary index rows are updated atomically
7820          #    with the base table rows.
7821          #
7822          #  - Partitioned DML does not guarantee exactly-once execution semantics
7823          #    against a partition. The statement will be applied at least once to each
7824          #    partition. It is strongly recommended that the DML statement be
7825          #    idempotent to avoid unexpected results. For instance, it is potentially
7826          #    dangerous to run a statement such as
7827          #    `UPDATE table SET column = column + 1` as it could be run multiple times
7828          #    against some rows.
7829          #
7830          #  - The partitions are committed automatically - there is no support for
7831          #    Commit or Rollback. If the call returns an error, or if the client issuing
7832          #    the ExecuteSql call dies, it is possible that some rows had the statement
7833          #    executed on them successfully. It is also possible that the statement was
7834          #    never executed against other rows.
7835          #
7836          #  - Partitioned DML transactions may only contain the execution of a single
7837          #    DML statement via ExecuteSql or ExecuteStreamingSql.
7838          #
7839          #  - If any error is encountered during the execution of the partitioned DML
7840          #    operation (for instance, a UNIQUE INDEX violation, division by zero, or a
7841          #    value that cannot be stored due to schema constraints), then the
7842          #    operation is stopped at that point and an error is returned. It is
7843          #    possible that at this point, some partitions have been committed (or even
7844          #    committed multiple times), and other partitions have not been run at all.
7845          #
7846          # Given the above, Partitioned DML is a good fit for large, database-wide
7847          # operations that are idempotent, such as deleting old rows from a very large
7848          # table.
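A sketch of the two request bodies involved in running an idempotent Partitioned DML statement (the table and column names are hypothetical):

```python
def partitioned_dml_bodies(transaction_id=None):
    """Bodies for beginTransaction and executeSql when running a
    table-wide, idempotent UPDATE as Partitioned DML."""
    begin_body = {"options": {"partitionedDml": {}}}
    # Idempotent by construction: re-running it against a partition yields
    # the same state, which matters because execution is at-least-once.
    execute_body = {
        "sql": "UPDATE Albums SET MarketingBudget = 0 "
               "WHERE MarketingBudget IS NULL",
    }
    if transaction_id is not None:
        execute_body["transaction"] = {"id": transaction_id}
    return begin_body, execute_body
```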
7849        "readWrite": { # Message type to initiate a read-write transaction. Currently this # Transaction may write.
7850            # transaction type has no options.
7851            #
7852            # Authorization to begin a read-write transaction requires
7853            # `spanner.databases.beginOrRollbackReadWriteTransaction` permission
7854            # on the `session` resource.
7855        },
7856        "readOnly": { # Message type to initiate a read-only transaction. # Transaction will not write.
7857            #
7858            # Authorization to begin a read-only transaction requires
7859            # `spanner.databases.beginReadOnlyTransaction` permission
7860            # on the `session` resource.
7861          "minReadTimestamp": "A String", # Executes all reads at a timestamp >= `min_read_timestamp`.
7862              #
7863              # This is useful for requesting fresher data than some previous
7864              # read, or data that is fresh enough to observe the effects of some
7865              # previously committed transaction whose timestamp is known.
7866              #
7867              # Note that this option can only be used in single-use transactions.
7868              #
7869              # A timestamp in RFC3339 UTC "Zulu" format, accurate to nanoseconds.
7870              # Example: `"2014-10-02T15:01:23.045123456Z"`.
7871          "returnReadTimestamp": True or False, # If true, the Cloud Spanner-selected read timestamp is included in
7872              # the Transaction message that describes the transaction.
7873          "maxStaleness": "A String", # Read data at a timestamp >= `NOW - max_staleness`
7874              # seconds. Guarantees that all writes that have committed more
7875              # than the specified number of seconds ago are visible. Because
7876              # Cloud Spanner chooses the exact timestamp, this mode works even if
7877              # the client's local clock is substantially skewed from Cloud Spanner
7878              # commit timestamps.
7879              #
7880              # Useful for reading the freshest data available at a nearby
7881              # replica, while bounding the possible staleness if the local
7882              # replica has fallen behind.
7883              #
7884              # Note that this option can only be used in single-use
7885              # transactions.
7886          "exactStaleness": "A String", # Executes all reads at a timestamp that is `exact_staleness`
7887              # old. The timestamp is chosen soon after the read is started.
7888              #
7889              # Guarantees that all writes that have committed more than the
7890              # specified number of seconds ago are visible. Because Cloud Spanner
7891              # chooses the exact timestamp, this mode works even if the client's
7892              # local clock is substantially skewed from Cloud Spanner commit
7893              # timestamps.
7894              #
7895              # Useful for reading at nearby replicas without the distributed
7896              # timestamp negotiation overhead of `max_staleness`.
7897          "readTimestamp": "A String", # Executes all reads at the given timestamp. Unlike other modes,
7898              # reads at a specific timestamp are repeatable; the same read at
7899              # the same timestamp always returns the same data. If the
7900              # timestamp is in the future, the read will block until the
7901              # specified timestamp, modulo the read's deadline.
7902              #
7903              # Useful for large scale consistent reads such as mapreduces, or
7904              # for coordinating many reads against a consistent snapshot of the
7905              # data.
7906              #
7907              # A timestamp in RFC3339 UTC "Zulu" format, accurate to nanoseconds.
7908              # Example: `"2014-10-02T15:01:23.045123456Z"`.
7909          "strong": True or False, # Read at a timestamp where all previously committed transactions
7910              # are visible.
7911        },
7912        "partitionedDml": { # Message type to initiate a Partitioned DML transaction. # Partitioned DML transaction.
7913            #
7914            # Authorization to begin a Partitioned DML transaction requires
7915            # `spanner.databases.beginPartitionedDmlTransaction` permission
7916            # on the `session` resource.
7917        },
7918      },
7919      "id": "A String", # Execute the read or SQL query in a previously-started transaction.
7920    },
7921    "resumeToken": "A String", # If this request is resuming a previously interrupted read,
7922        # `resume_token` should be copied from the last
7923        # PartialResultSet yielded before the interruption. Doing this
7924        # enables the new read to resume where the last read left off. The
7925        # rest of the request parameters must exactly match the request
7926        # that yielded this token.
7927    "partitionToken": "A String", # If present, results will be restricted to the specified partition
7928        # previously created using PartitionRead(). There must be an exact
7929        # match for the values of fields common to this message and the
7930        # PartitionReadRequest message used to create this partition_token.
7931    "keySet": { # `KeySet` defines a collection of Cloud Spanner keys and/or key ranges. # Required. `key_set` identifies the rows to be yielded. `key_set` names the
7932        # primary keys of the rows in table to be yielded, unless index
7933        # is present. If index is present, then key_set instead names
7934        # index keys in index.
7935        #
7936        # If the partition_token field is empty, rows are yielded
7937        # in table primary key order (if index is empty) or index key order
7938        # (if index is non-empty).  If the partition_token field is not
7939        # empty, rows will be yielded in an unspecified order.
7940        #
7941        # It is not an error for the `key_set` to name rows that do not
7942        # exist in the database. Read yields nothing for nonexistent rows.
7943        #
7944        # All the keys are expected to be in the same table or index. The keys
7945        # need not be sorted in any particular way.
7945        #
7946        # If the same key is specified multiple times in the set (for example
7947        # if two ranges, two keys, or a key and a range overlap), Cloud Spanner
7948        # behaves as if the key were only specified once.
7949      "ranges": [ # A list of key ranges. See KeyRange for more information about
7950          # key range specifications.
7951        { # KeyRange represents a range of rows in a table or index.
7952            #
7953            # A range has a start key and an end key. These keys can be open or
7954            # closed, indicating if the range includes rows with that key.
7955            #
7956            # Keys are represented by lists, where the ith value in the list
7957            # corresponds to the ith component of the table or index primary key.
7958            # Individual values are encoded as described
7959            # here.
7960            #
7961            # For example, consider the following table definition:
7962            #
7963            #     CREATE TABLE UserEvents (
7964            #       UserName STRING(MAX),
7965            #       EventDate STRING(10)
7966            #     ) PRIMARY KEY(UserName, EventDate);
7967            #
7968            # The following keys name rows in this table:
7969            #
7970            #     "Bob", "2014-09-23"
7971            #
7972            # Since the `UserEvents` table's `PRIMARY KEY` clause names two
7973            # columns, each `UserEvents` key has two elements; the first is the
7974            # `UserName`, and the second is the `EventDate`.
7975            #
7976            # Key ranges with multiple components are interpreted
7977            # lexicographically by component using the table or index key's declared
7978            # sort order. For example, the following range returns all events for
7979            # user `"Bob"` that occurred in the year 2015:
7980            #
7981            #     "start_closed": ["Bob", "2015-01-01"]
7982            #     "end_closed": ["Bob", "2015-12-31"]
7983            #
7984            # Start and end keys can omit trailing key components. This affects the
7985            # inclusion and exclusion of rows that exactly match the provided key
7986            # components: if the key is closed, then rows that exactly match the
7987            # provided components are included; if the key is open, then rows
7988            # that exactly match are not included.
7989            #
7990            # For example, the following range includes all events for `"Bob"` that
7991            # occurred during and after the year 2000:
7992            #
7993            #     "start_closed": ["Bob", "2000-01-01"]
7994            #     "end_closed": ["Bob"]
7995            #
7996            # The next example retrieves all events for `"Bob"`:
7997            #
7998            #     "start_closed": ["Bob"]
7999            #     "end_closed": ["Bob"]
8000            #
8001            # To retrieve events before the year 2000:
8002            #
8003            #     "start_closed": ["Bob"]
8004            #     "end_open": ["Bob", "2000-01-01"]
8005            #
8006            # The following range includes all rows in the table:
8007            #
8008            #     "start_closed": []
8009            #     "end_closed": []
8010            #
8011            # This range returns all users whose `UserName` begins with any
8012            # character from A to C:
8013            #
8014            #     "start_closed": ["A"]
8015            #     "end_open": ["D"]
8016            #
8017            # This range returns all users whose `UserName` begins with B:
8018            #
8019            #     "start_closed": ["B"]
8020            #     "end_open": ["C"]
8021            #
8022            # Key ranges honor column sort order. For example, suppose a table is
8023            # defined as follows:
8024            #
8025            #     CREATE TABLE DescendingSortedTable (
8026            #       Key INT64,
8027            #       ...
8028            #     ) PRIMARY KEY(Key DESC);
8029            #
8030            # The following range retrieves all rows with key values between 1
8031            # and 100 inclusive:
8032            #
8033            #     "start_closed": ["100"]
8034            #     "end_closed": ["1"]
8035            #
8036            # Note that 100 is passed as the start, and 1 is passed as the end,
8037            # because `Key` is a descending column in the schema.
8038          "endOpen": [ # If the end is open, then the range excludes rows whose first
8039              # `len(end_open)` key columns exactly match `end_open`.
8040            "",
8041          ],
8042          "startOpen": [ # If the start is open, then the range excludes rows whose first
8043              # `len(start_open)` key columns exactly match `start_open`.
8044            "",
8045          ],
8046          "endClosed": [ # If the end is closed, then the range includes all rows whose
8047              # first `len(end_closed)` key columns exactly match `end_closed`.
8048            "",
8049          ],
8050          "startClosed": [ # If the start is closed, then the range includes all rows whose
8051              # first `len(start_closed)` key columns exactly match `start_closed`.
8052            "",
8053          ],
8054        },
8055      ],
8056      "keys": [ # A list of specific keys. Entries in `keys` should have exactly as
8057          # many elements as there are columns in the primary or index key
8058          # with which this `KeySet` is used.  Individual key values are
8059          # encoded as described here.
8060        [
8061          "",
8062        ],
8063      ],
8064      "all": True or False, # For convenience `all` can be set to `true` to indicate that this
8065          # `KeySet` matches all keys in the table or index. Note that any keys
8066          # specified in `keys` or `ranges` are only yielded once.
8067    },
8068    "limit": "A String", # If greater than zero, only the first `limit` rows are yielded. If `limit`
8069        # is zero, the default is no limit. A limit cannot be specified if
8070        # `partition_token` is set.
8071    "table": "A String", # Required. The name of the table in the database to be read.
8072    "columns": [ # The columns of table to be returned for each row matching
8073        # this request.
8074      "A String",
8075    ],
8076  }
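Putting the pieces together, a sketched ReadRequest body using the `UserEvents` example from the KeyRange discussion above (the helper name is an illustration, not library API):

```python
def user_events_2015_body():
    """All of Bob's 2015 events, yielded in primary key order."""
    return {
        "table": "UserEvents",
        "columns": ["UserName", "EventDate"],
        "keySet": {
            "ranges": [{
                "startClosed": ["Bob", "2015-01-01"],
                "endClosed": ["Bob", "2015-12-31"],
            }],
        },
    }
```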
8077
8078  x__xgafv: string, V1 error format.
8079    Allowed values
8080      1 - v1 error format
8081      2 - v2 error format
8082
8083Returns:
8084  An object of the form:
8085
8086    { # Partial results from a streaming read or SQL query. Streaming reads and
8087      # SQL queries better tolerate large result sets, large rows, and large
8088      # values, but are a little trickier to consume.
8089    "resumeToken": "A String", # Streaming calls might be interrupted for a variety of reasons, such
8090        # as TCP connection loss. If this occurs, the stream of results can
8091        # be resumed by re-sending the original request and including
8092        # `resume_token`. Note that executing any other transaction in the
8093        # same session invalidates the token.
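Resumption can be sketched as a wrapper that replays the original request with the last seen token (the `read_page` callable is hypothetical: any function mapping a request body to an iterator of `PartialResultSet` dicts, which may raise ConnectionError mid-stream):

```python
def resume_stream(start_request, read_page):
    """Consume a streaming read, resuming after interruptions by
    re-sending the original request with the last `resumeToken`."""
    request = dict(start_request)
    token = request.get("resumeToken")
    while True:
        try:
            for result in read_page(request):
                token = result.get("resumeToken", token)
                yield result
            return  # stream completed normally
        except ConnectionError:
            if token is None:
                raise  # nothing yielded yet; cannot resume
            request["resumeToken"] = token
```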
8094    "chunkedValue": True or False, # If true, then the final value in values is chunked, and must
8095        # be combined with more values from subsequent `PartialResultSet`s
8096        # to obtain a complete field value.
8097    "values": [ # A streamed result set consists of a stream of values, which might
8098        # be split into many `PartialResultSet` messages to accommodate
8099        # large rows and/or large values. Every N complete values defines a
8100        # row, where N is equal to the number of entries in
8101        # metadata.row_type.fields.
8102        #
8103        # Most values are encoded based on type as described
8104        # here.
8105        #
8106        # It is possible that the last value in values is "chunked",
8107        # meaning that the rest of the value is sent in subsequent
8108        # `PartialResultSet`(s). This is denoted by the chunked_value
8109        # field. Two or more chunked values can be merged to form a
8110        # complete value as follows:
8111        #
8112        #   * `bool/number/null`: cannot be chunked
8113        #   * `string`: concatenate the strings
8114        #   * `list`: concatenate the lists. If the last element in a list is a
8115        #     `string`, `list`, or `object`, merge it with the first element in
8116        #     the next list by applying these rules recursively.
8117        #   * `object`: concatenate the (field name, field value) pairs. If a
8118        #     field name is duplicated, then apply these rules recursively
8119        #     to merge the field values.
8120        #
8121        # Some examples of merging:
8122        #
8123        #     # Strings are concatenated.
8124        #     "foo", "bar" => "foobar"
8125        #
8126        #     # Lists of non-strings are concatenated.
8127        #     [2, 3], [4] => [2, 3, 4]
8128        #
8129        #     # Lists are concatenated, but the last and first elements are merged
8130        #     # because they are strings.
8131        #     ["a", "b"], ["c", "d"] => ["a", "bc", "d"]
8132        #
8133        #     # Lists are concatenated, but the last and first elements are merged
8134        #     # because they are lists. Recursively, the last and first elements
8135        #     # of the inner lists are merged because they are strings.
8136        #     ["a", ["b", "c"]], [["d"], "e"] => ["a", ["b", "cd"], "e"]
8137        #
8138        #     # Non-overlapping object fields are combined.
8139        #     {"a": "1"}, {"b": "2"} => {"a": "1", "b": "2"}
8140        #
8141        #     # Overlapping object fields are merged.
8142        #     {"a": "1"}, {"a": "2"} => {"a": "12"}
8143        #
8144        #     # Examples of merging objects containing lists of strings.
8145        #     {"a": ["1"]}, {"a": ["2"]} => {"a": ["12"]}
8146        #
8147        # For a more complete example, suppose a streaming SQL query is
8148        # yielding a result set whose rows contain a single string
8149        # field. The following `PartialResultSet`s might be yielded:
8150        #
8151        #     {
8152        #       "metadata": { ... }
8153        #       "values": ["Hello", "W"]
8154        #       "chunked_value": true
8155        #       "resume_token": "Af65..."
8156        #     }
8157        #     {
8158        #       "values": ["orl"]
8159        #       "chunked_value": true
8160        #       "resume_token": "Bqp2..."
8161        #     }
8162        #     {
8163        #       "values": ["d"]
8164        #       "resume_token": "Zx1B..."
8165        #     }
8166        #
8167        # This sequence of `PartialResultSet`s encodes two rows, one
8168        # containing the field value `"Hello"`, and a second containing the
8169        # field value `"World" = "W" + "orl" + "d"`.
8170      "",
8171    ],
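The merge rules above can be sketched directly in Python (a minimal illustration of just these rules, not the client library's implementation):

```python
def merge_chunked(a, b):
    """Merge two chunked values: strings concatenate, lists concatenate
    with their boundary elements merged recursively, objects (dicts)
    merge duplicated fields recursively."""
    if isinstance(a, str) and isinstance(b, str):
        return a + b
    if isinstance(a, list) and isinstance(b, list):
        if a and b and isinstance(a[-1], (str, list, dict)):
            # Boundary elements are mergeable: apply the rules recursively.
            return a[:-1] + [merge_chunked(a[-1], b[0])] + b[1:]
        return a + b
    if isinstance(a, dict) and isinstance(b, dict):
        out = dict(a)
        for key, value in b.items():
            out[key] = merge_chunked(out[key], value) if key in out else value
        return out
    raise TypeError("bool/number/null values cannot be chunked")
```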
8172    "stats": { # Additional statistics about a ResultSet or PartialResultSet. # Query plan and execution statistics for the statement that produced this
8173        # streaming result set. These can be requested by setting
8174        # ExecuteSqlRequest.query_mode and are sent
8175        # only once with the last response in the stream.
8176        # This field will also be present in the last response for DML
8177        # statements.
8178      "rowCountLowerBound": "A String", # Partitioned DML does not offer exactly-once semantics, so it
8179          # returns a lower bound of the rows modified.
8180      "rowCountExact": "A String", # Standard DML returns an exact count of rows that were modified.
8181      "queryPlan": { # Contains an ordered list of nodes appearing in the query plan. # QueryPlan for the query associated with this result.
8182        "planNodes": [ # The nodes in the query plan. Plan nodes are returned in pre-order starting
8183            # with the plan root. Each PlanNode's `id` corresponds to its index in
8184            # `plan_nodes`.
8185          { # Node information for nodes appearing in a QueryPlan.plan_nodes.
8186            "index": 42, # The `PlanNode`'s index in the node list.
8187            "kind": "A String", # Used to determine the type of node. May be needed for visualizing
8188                # different kinds of nodes differently. For example, if the node is a
8189                # SCALAR node, it will have a condensed representation
8190                # which can be used to directly embed a description of the node in its
8191                # parent.
8192            "displayName": "A String", # The display name for the node.
8193            "executionStats": { # The execution statistics associated with the node, contained in a group of
8194                # key-value pairs. Only present if the plan was returned as a result of a
8195                # profile query. For example, number of executions, number of rows/time per
8196                # execution etc.
8197              "a_key": "", # Properties of the object.
8198            },
8199            "childLinks": [ # List of child node `index`es and their relationship to this parent.
8200              { # Metadata associated with a parent-child relationship appearing in a
8201                  # PlanNode.
8202                "variable": "A String", # Only present if the child node is SCALAR and corresponds
8203                    # to an output variable of the parent node. The field carries the name of
8204                    # the output variable.
8205                    # For example, a `TableScan` operator that reads rows from a table will
8206                    # have child links to the `SCALAR` nodes representing the output variables
8207                    # created for each column that is read by the operator. The corresponding
8208                    # `variable` fields will be set to the variable names assigned to the
8209                    # columns.
8210                "childIndex": 42, # The node to which the link points.
8211                "type": "A String", # The type of the link. For example, in Hash Joins this could be used to
8212                    # distinguish between the build child and the probe child, or in the case
8213                    # of the child being an output variable, to represent the tag associated
8214                    # with the output variable.
8215              },
8216            ],
8217            "shortRepresentation": { # Condensed representation of a node and its subtree. Only present for # Condensed representation for SCALAR nodes.
8218                # `SCALAR` PlanNode(s).
8219              "subqueries": { # A mapping of (subquery variable name) -> (subquery node id) for cases
8220                  # where the `description` string of this node references a `SCALAR`
8221                  # subquery contained in the expression subtree rooted at this node. The
8222                  # referenced `SCALAR` subquery may not necessarily be a direct child of
8223                  # this node.
8224                "a_key": 42,
8225              },
8226              "description": "A String", # A string representation of the expression subtree rooted at this node.
8227            },
            "metadata": { # Attributes relevant to the node contained in a group of key-value pairs.
                # For example, a Parameter Reference node could have the following
                # information in its metadata:
                #
                #     {
                #       "parameter_reference": "param1",
                #       "parameter_type": "array"
                #     }
              "a_key": "", # Properties of the object.
            },
          },
        ],
      },
      "queryStats": { # Aggregated statistics from the execution of the query. Only present when
          # the query is profiled. For example, a query could return the statistics as
          # follows:
          #
          #     {
          #       "rows_returned": "3",
          #       "elapsed_time": "1.22 secs",
          #       "cpu_time": "1.19 secs"
          #     }
        "a_key": "", # Properties of the object.
      },
    },
    "metadata": { # Metadata about a ResultSet or PartialResultSet. # Metadata about the result set, such as row type information.
        # Only present in the first response.
      "rowType": { # `StructType` defines the fields of a STRUCT type. # Indicates the field names and types for the rows in the result
          # set.  For example, a SQL query like `"SELECT UserId, UserName FROM
          # Users"` could return a `row_type` value like:
          #
          #     "fields": [
          #       { "name": "UserId", "type": { "code": "INT64" } },
          #       { "name": "UserName", "type": { "code": "STRING" } },
          #     ]
        "fields": [ # The list of fields that make up this struct. Order is
            # significant, because values of this struct type are represented as
            # lists, where the order of field values matches the order of
            # fields in the StructType. In turn, the order of fields
            # matches the order of columns in a read request, or the order of
            # fields in the `SELECT` clause of a query.
          { # Message representing a single field of a struct.
            "type": { # `Type` indicates the type of a Cloud Spanner value, as might be stored in a # The type of the field.
                # table cell or returned from an SQL query.
              "structType": # Object with schema name: StructType # If code == STRUCT, then `struct_type`
                  # provides type information for the struct's fields.
              "code": "A String", # Required. The TypeCode for this type.
              "arrayElementType": # Object with schema name: Type # If code == ARRAY, then `array_element_type`
                  # is the type of the array elements.
            },
            "name": "A String", # The name of the field. For reads, this is the column name. For
                # SQL queries, it is the column alias (e.g., `"Word"` in the
                # query `"SELECT 'hello' AS Word"`), or the column name (e.g.,
                # `"ColName"` in the query `"SELECT ColName FROM Table"`). Some
                # columns might have an empty name (e.g., `"SELECT
                # UPPER(ColName)"`). Note that a query result can contain
                # multiple fields with the same name.
          },
        ],
      },
      "transaction": { # A transaction. # If the read or SQL query began a transaction as a side-effect, the
          # information about the new transaction is yielded here.
        "readTimestamp": "A String", # For snapshot read-only transactions, the read timestamp chosen
            # for the transaction. Not returned by default: see
            # TransactionOptions.ReadOnly.return_read_timestamp.
            #
            # A timestamp in RFC3339 UTC "Zulu" format, accurate to nanoseconds.
            # Example: `"2014-10-02T15:01:23.045123456Z"`.
        "id": "A String", # `id` may be used to identify the transaction in subsequent
            # Read,
            # ExecuteSql,
            # Commit, or
            # Rollback calls.
            #
            # Single-use read-only transactions do not have IDs, because
            # single-use transactions do not support multiple requests.
      },
    },
  }</pre>
</div>

</body></html>
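<p>The response dictionary above can be picked apart in plain Python. Below is a minimal sketch of reading the <code>metadata.rowType.fields</code> and <code>stats.queryStats</code> entries out of a result set; the helper names (<code>column_info</code>, <code>profile_stats</code>) are illustrative, not part of this library, and the sample values are taken from the examples in the schema comments, not from a live query:</p>

```python
def column_info(result_set):
    """Return (name, type code) pairs from a ResultSet's row type metadata.

    The metadata is only present in the first response of a streaming call.
    """
    fields = result_set.get("metadata", {}).get("rowType", {}).get("fields", [])
    return [(f.get("name", ""), f.get("type", {}).get("code")) for f in fields]


def profile_stats(result_set):
    """Return aggregated query statistics; present only for profiled queries."""
    return result_set.get("stats", {}).get("queryStats", {})


# Illustrative ResultSet, shaped like the schema documented above.
sample = {
    "metadata": {
        "rowType": {
            "fields": [
                {"name": "UserId", "type": {"code": "INT64"}},
                {"name": "UserName", "type": {"code": "STRING"}},
            ]
        }
    },
    "stats": {
        "queryStats": {
            "rows_returned": "3",
            "elapsed_time": "1.22 secs",
            "cpu_time": "1.19 secs",
        }
    },
}

print(column_info(sample))    # [('UserId', 'INT64'), ('UserName', 'STRING')]
print(profile_stats(sample))  # {'rows_returned': '3', ...}
```

<p>Both helpers fall back to empty containers, so they are safe to call on later responses of a streaming query, where <code>metadata</code> and <code>stats</code> are absent.</p>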