<html><body>
<style>

body, h1, h2, h3, div, span, p, pre, a {
  margin: 0;
  padding: 0;
  border: 0;
  font-weight: inherit;
  font-style: inherit;
  font-size: 100%;
  font-family: inherit;
  vertical-align: baseline;
}

body {
  font-size: 13px;
  padding: 1em;
}

h1 {
  font-size: 26px;
  margin-bottom: 1em;
}

h2 {
  font-size: 24px;
  margin-bottom: 1em;
}

h3 {
  font-size: 20px;
  margin-bottom: 1em;
  margin-top: 1em;
}

pre, code {
  line-height: 1.5;
  font-family: Monaco, 'DejaVu Sans Mono', 'Bitstream Vera Sans Mono', 'Lucida Console', monospace;
}

pre {
  margin-top: 0.5em;
}

h1, h2, h3, p {
  font-family: Arial, sans-serif;
}

h1, h2, h3 {
  border-bottom: solid #CCC 1px;
}

.toc_element {
  margin-top: 0.5em;
}

.firstline {
  margin-left: 2em;
}

.method {
  margin-top: 1em;
  border: solid 1px #CCC;
  padding: 1em;
  background: #EEE;
}

.details {
  font-weight: bold;
  font-size: 14px;
}

</style>

<h1><a href="speech_v1beta1.html">Google Cloud Speech API</a> . <a href="speech_v1beta1.speech.html">speech</a></h1>
<h2>Instance Methods</h2>
<p class="toc_element">
  <code><a href="#asyncrecognize">asyncrecognize(body, x__xgafv=None)</a></code></p>
<p class="firstline">Performs asynchronous speech recognition: receive results via the google.longrunning.Operations interface.</p>
<p class="toc_element">
  <code><a href="#syncrecognize">syncrecognize(body, x__xgafv=None)</a></code></p>
<p class="firstline">Performs synchronous speech recognition: receive results after all audio has been sent and processed.</p>
<h3>Method Details</h3>
<div class="method">
    <code class="details" id="asyncrecognize">asyncrecognize(body, x__xgafv=None)</code>
  <pre>Performs asynchronous speech recognition: receive results via the
[google.longrunning.Operations](/speech/reference/rest/v1beta1/operations#Operation)
interface. Returns either an `Operation.error` or an `Operation.response`
which contains an `AsyncRecognizeResponse` message.

Args:
  body: object, The request body. (required)
    The object takes the form of:

{ # The top-level message sent by the client for the `AsyncRecognize` method.
    "audio": { # Contains audio data in the encoding specified in the `RecognitionConfig`. # *Required* The audio data to be recognized.
        # Either `content` or `uri` must be supplied. Supplying both or neither
        # returns google.rpc.Code.INVALID_ARGUMENT. See
        # [audio limits](https://cloud.google.com/speech/limits#content).
      "content": "A String", # The audio data bytes encoded as specified in
          # `RecognitionConfig`. Note: as with all bytes fields, protocol buffers use a
          # pure binary representation, whereas JSON representations use base64.
      "uri": "A String", # URI that points to a file that contains audio data bytes as specified in
          # `RecognitionConfig`. Currently, only Google Cloud Storage URIs are
          # supported, which must be specified in the following format:
          # `gs://bucket_name/object_name` (other URI formats return
          # google.rpc.Code.INVALID_ARGUMENT). For more information, see
          # [Request URIs](https://cloud.google.com/storage/docs/reference-uris).
    },
112    "config": { # Provides information to the recognizer that specifies how to process the # *Required* Provides information to the recognizer that specifies how to
113        # process the request.
114        # request.
115      "languageCode": "A String", # *Optional* The language of the supplied audio as a BCP-47 language tag.
116          # Example: "en-GB"  https://www.rfc-editor.org/rfc/bcp/bcp47.txt
117          # If omitted, defaults to "en-US". See
118          # [Language Support](https://cloud.google.com/speech/docs/languages)
119          # for a list of the currently supported language codes.
120      "speechContext": { # Provides "hints" to the speech recognizer to favor specific words and phrases # *Optional* A means to provide context to assist the speech recognition.
121          # in the results.
122        "phrases": [ # *Optional* A list of strings containing words and phrases "hints" so that
123            # the speech recognition is more likely to recognize them. This can be used
124            # to improve the accuracy for specific words and phrases, for example, if
125            # specific commands are typically spoken by the user. This can also be used
126            # to add additional words to the vocabulary of the recognizer. See
127            # [usage limits](https://cloud.google.com/speech/limits#content).
128          "A String",
129        ],
130      },
131      "encoding": "A String", # *Required* Encoding of audio data sent in all `RecognitionAudio` messages.
132      "maxAlternatives": 42, # *Optional* Maximum number of recognition hypotheses to be returned.
133          # Specifically, the maximum number of `SpeechRecognitionAlternative` messages
134          # within each `SpeechRecognitionResult`.
135          # The server may return fewer than `max_alternatives`.
136          # Valid values are `0`-`30`. A value of `0` or `1` will return a maximum of
137          # one. If omitted, will return a maximum of one.
138      "profanityFilter": True or False, # *Optional* If set to `true`, the server will attempt to filter out
139          # profanities, replacing all but the initial character in each filtered word
140          # with asterisks, e.g. "f***". If set to `false` or omitted, profanities
141          # won't be filtered out.
142      "sampleRate": 42, # *Required* Sample rate in Hertz of the audio data sent in all
143          # `RecognitionAudio` messages. Valid values are: 8000-48000.
144          # 16000 is optimal. For best results, set the sampling rate of the audio
145          # source to 16000 Hz. If that's not possible, use the native sample rate of
146          # the audio source (instead of re-sampling).
147    },
148  }
149
  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # This resource represents a long-running operation that is the result of a
      # network API call.
    "metadata": { # Service-specific metadata associated with the operation.  It typically
        # contains progress information and common metadata such as create time.
        # Some services might not provide such metadata.  Any method that returns a
        # long-running operation should document the metadata type, if any.
      "a_key": "", # Properties of the object. Contains field @type with type URL.
    },
    "done": True or False, # If the value is `false`, it means the operation is still in progress.
        # If `true`, the operation is completed, and either `error` or `response` is
        # available.
    "response": { # The normal response of the operation in case of success.  If the original
        # method returns no data on success, such as `Delete`, the response is
        # `google.protobuf.Empty`.  If the original method is standard
        # `Get`/`Create`/`Update`, the response should be the resource.  For other
        # methods, the response should have the type `XxxResponse`, where `Xxx`
        # is the original method name.  For example, if the original method name
        # is `TakeSnapshot()`, the inferred response type is
        # `TakeSnapshotResponse`.
      "a_key": "", # Properties of the object. Contains field @type with type URL.
    },
    "name": "A String", # The server-assigned name, which is only unique within the same service that
        # originally returns it. If you use the default HTTP mapping, the
        # `name` should have the format of `operations/some/unique/name`.
    "error": { # The `Status` type defines a logical error model that is suitable for different # The error result of the operation in case of failure or cancellation.
        # programming environments, including REST APIs and RPC APIs. It is used by
        # [gRPC](https://github.com/grpc). The error model is designed to be:
        #
        # - Simple to use and understand for most users
        # - Flexible enough to meet unexpected needs
        #
        # # Overview
        #
        # The `Status` message contains three pieces of data: error code, error message,
        # and error details. The error code should be an enum value of
        # google.rpc.Code, but it may accept additional error codes if needed.  The
        # error message should be a developer-facing English message that helps
        # developers *understand* and *resolve* the error. If a localized user-facing
        # error message is needed, put the localized message in the error details or
        # localize it in the client. The optional error details may contain arbitrary
        # information about the error. There is a predefined set of error detail types
        # in the package `google.rpc` that can be used for common error conditions.
        #
        # # Language mapping
        #
        # The `Status` message is the logical representation of the error model, but it
        # is not necessarily the actual wire format. When the `Status` message is
        # exposed in different client libraries and different wire protocols, it can be
        # mapped differently. For example, it will likely be mapped to some exceptions
        # in Java, but more likely mapped to some error codes in C.
        #
        # # Other uses
        #
        # The error model and the `Status` message can be used in a variety of
        # environments, either with or without APIs, to provide a
        # consistent developer experience across different environments.
        #
        # Example uses of this error model include:
        #
        # - Partial errors. If a service needs to return partial errors to the client,
        #     it may embed the `Status` in the normal response to indicate the partial
        #     errors.
        #
        # - Workflow errors. A typical workflow has multiple steps. Each step may
        #     have a `Status` message for error reporting.
        #
        # - Batch operations. If a client uses batch request and batch response, the
        #     `Status` message should be used directly inside batch response, one for
        #     each error sub-response.
        #
        # - Asynchronous operations. If an API call embeds asynchronous operation
        #     results in its response, the status of those operations should be
        #     represented directly using the `Status` message.
        #
        # - Logging. If some API errors are stored in logs, the message `Status` could
        #     be used directly after any stripping needed for security/privacy reasons.
      "message": "A String", # A developer-facing error message, which should be in English. Any
          # user-facing error message should be localized and sent in the
          # google.rpc.Status.details field, or localized by the client.
      "code": 42, # The status code, which should be an enum value of google.rpc.Code.
      "details": [ # A list of messages that carry the error details.  There will be a
          # common set of message types for APIs to use.
        {
          "a_key": "", # Properties of the object. Contains field @type with type URL.
        },
      ],
    },
  }</pre>
</div>

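<p>A minimal sketch of calling <code>asyncrecognize</code> with the
google-api-python-client discovery client, assuming application default
credentials are configured. The <code>gs://</code> bucket and object names
are placeholders, and the polling step assumes this API's long-running
operations resource is exposed as <code>service.operations()</code>.</p>
<pre>import time

from googleapiclient.discovery import build

# Build the Speech v1beta1 client from its discovery document.
service = build('speech', 'v1beta1')

body = {
    'config': {
        'encoding': 'LINEAR16',   # raw 16-bit signed little-endian PCM
        'sampleRate': 16000,      # 16000 Hz is optimal, per the docs above
        'languageCode': 'en-US',
    },
    'audio': {
        # Placeholder URI; only Google Cloud Storage URIs are supported.
        'uri': 'gs://my-bucket/my-audio.raw',
    },
}

# Start the job; the immediate response is a long-running Operation.
operation = service.speech().asyncrecognize(body=body).execute()

# Poll until `done`, then inspect either `error` or `response`.
while not operation.get('done'):
    time.sleep(5)
    operation = service.operations().get(name=operation['name']).execute()

if 'error' in operation:
    print('Error %s: %s' % (operation['error']['code'],
                            operation['error']['message']))
else:
    print(operation['response'])  # contains an AsyncRecognizeResponse
</pre>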
<div class="method">
    <code class="details" id="syncrecognize">syncrecognize(body, x__xgafv=None)</code>
  <pre>Performs synchronous speech recognition: receive results after all audio
has been sent and processed.

Args:
  body: object, The request body. (required)
    The object takes the form of:

{ # The top-level message sent by the client for the `SyncRecognize` method.
258    "audio": { # Contains audio data in the encoding specified in the `RecognitionConfig`. # *Required* The audio data to be recognized.
259        # Either `content` or `uri` must be supplied. Supplying both or neither
260        # returns google.rpc.Code.INVALID_ARGUMENT. See
261        # [audio limits](https://cloud.google.com/speech/limits#content).
262      "content": "A String", # The audio data bytes encoded as specified in
          # `RecognitionConfig`. Note: as with all bytes fields, protocol buffers use a
          # pure binary representation, whereas JSON representations use base64.
      "uri": "A String", # URI that points to a file that contains audio data bytes as specified in
          # `RecognitionConfig`. Currently, only Google Cloud Storage URIs are
          # supported, which must be specified in the following format:
          # `gs://bucket_name/object_name` (other URI formats return
          # google.rpc.Code.INVALID_ARGUMENT). For more information, see
          # [Request URIs](https://cloud.google.com/storage/docs/reference-uris).
    },
272    "config": { # Provides information to the recognizer that specifies how to process the # *Required* Provides information to the recognizer that specifies how to
273        # process the request.
274        # request.
275      "languageCode": "A String", # *Optional* The language of the supplied audio as a BCP-47 language tag.
276          # Example: "en-GB"  https://www.rfc-editor.org/rfc/bcp/bcp47.txt
277          # If omitted, defaults to "en-US". See
278          # [Language Support](https://cloud.google.com/speech/docs/languages)
279          # for a list of the currently supported language codes.
280      "speechContext": { # Provides "hints" to the speech recognizer to favor specific words and phrases # *Optional* A means to provide context to assist the speech recognition.
281          # in the results.
282        "phrases": [ # *Optional* A list of strings containing words and phrases "hints" so that
283            # the speech recognition is more likely to recognize them. This can be used
284            # to improve the accuracy for specific words and phrases, for example, if
285            # specific commands are typically spoken by the user. This can also be used
286            # to add additional words to the vocabulary of the recognizer. See
287            # [usage limits](https://cloud.google.com/speech/limits#content).
288          "A String",
289        ],
290      },
291      "encoding": "A String", # *Required* Encoding of audio data sent in all `RecognitionAudio` messages.
292      "maxAlternatives": 42, # *Optional* Maximum number of recognition hypotheses to be returned.
293          # Specifically, the maximum number of `SpeechRecognitionAlternative` messages
294          # within each `SpeechRecognitionResult`.
295          # The server may return fewer than `max_alternatives`.
296          # Valid values are `0`-`30`. A value of `0` or `1` will return a maximum of
297          # one. If omitted, will return a maximum of one.
298      "profanityFilter": True or False, # *Optional* If set to `true`, the server will attempt to filter out
299          # profanities, replacing all but the initial character in each filtered word
300          # with asterisks, e.g. "f***". If set to `false` or omitted, profanities
301          # won't be filtered out.
302      "sampleRate": 42, # *Required* Sample rate in Hertz of the audio data sent in all
303          # `RecognitionAudio` messages. Valid values are: 8000-48000.
304          # 16000 is optimal. For best results, set the sampling rate of the audio
305          # source to 16000 Hz. If that's not possible, use the native sample rate of
306          # the audio source (instead of re-sampling).
307    },
308  }
309
310  x__xgafv: string, V1 error format.
311    Allowed values
312      1 - v1 error format
313      2 - v2 error format
314
315Returns:
316  An object of the form:
317
    { # The only message returned to the client by the `SyncRecognize` method. It
      # contains the result as zero or more sequential `SpeechRecognitionResult`
      # messages.
    "results": [ # *Output-only* Sequential list of transcription results corresponding to
        # sequential portions of audio.
      { # A speech recognition result corresponding to a portion of the audio.
        "alternatives": [ # *Output-only* May contain one or more recognition hypotheses (up to the
            # maximum specified in `max_alternatives`).
          { # Alternative hypotheses (a.k.a. n-best list).
            "confidence": 3.14, # *Output-only* The confidence estimate between 0.0 and 1.0. A higher number
                # indicates an estimated greater likelihood that the recognized words are
                # correct. This field is typically provided only for the top hypothesis, and
                # only for `is_final=true` results. Clients should not rely on the
                # `confidence` field as it is not guaranteed to be accurate, or even set, in
                # any of the results.
                # The default of 0.0 is a sentinel value indicating `confidence` was not set.
            "transcript": "A String", # *Output-only* Transcript text representing the words that the user spoke.
          },
        ],
      },
    ],
  }</pre>
</div>

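<p>A corresponding sketch for the synchronous method, this time sending the
audio bytes inline via the <code>content</code> field. As noted above, bytes
fields are base64-encoded in JSON requests; the local file name here is a
placeholder.</p>
<pre>import base64

from googleapiclient.discovery import build

service = build('speech', 'v1beta1')

# Read local audio and base64-encode it for the JSON `content` field.
with open('my-audio.raw', 'rb') as f:
    content = base64.b64encode(f.read()).decode('ascii')

body = {
    'config': {
        'encoding': 'LINEAR16',
        'sampleRate': 16000,
        'languageCode': 'en-US',
    },
    'audio': {'content': content},
}

response = service.speech().syncrecognize(body=body).execute()

# Each result carries up to `maxAlternatives` hypotheses.
for result in response.get('results', []):
    for alternative in result['alternatives']:
        print(alternative.get('confidence'), alternative['transcript'])
</pre>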
</body></html>