<html><body>
<style>

body, h1, h2, h3, div, span, p, pre, a {
  margin: 0;
  padding: 0;
  border: 0;
  font-weight: inherit;
  font-style: inherit;
  font-size: 100%;
  font-family: inherit;
  vertical-align: baseline;
}

body {
  font-size: 13px;
  padding: 1em;
}

h1 {
  font-size: 26px;
  margin-bottom: 1em;
}

h2 {
  font-size: 24px;
  margin-bottom: 1em;
}

h3 {
  font-size: 20px;
  margin-bottom: 1em;
  margin-top: 1em;
}

pre, code {
  line-height: 1.5;
  font-family: Monaco, 'DejaVu Sans Mono', 'Bitstream Vera Sans Mono', 'Lucida Console', monospace;
}

pre {
  margin-top: 0.5em;
}

h1, h2, h3, p {
  font-family: Arial, sans-serif;
}

h1, h2, h3 {
  border-bottom: solid #CCC 1px;
}

.toc_element {
  margin-top: 0.5em;
}

.firstline {
  margin-left: 2em;
}

.method {
  margin-top: 1em;
  border: solid 1px #CCC;
  padding: 1em;
  background: #EEE;
}

.details {
  font-weight: bold;
  font-size: 14px;
}

</style>

<h1><a href="dialogflow_v2beta1.html">Dialogflow API</a> . <a href="dialogflow_v2beta1.projects.html">projects</a> . <a href="dialogflow_v2beta1.projects.agent.html">agent</a> . <a href="dialogflow_v2beta1.projects.agent.environments.html">environments</a> . <a href="dialogflow_v2beta1.projects.agent.environments.users.html">users</a> . <a href="dialogflow_v2beta1.projects.agent.environments.users.sessions.html">sessions</a></h1>
<h2>Instance Methods</h2>
<p class="toc_element">
  <code><a href="dialogflow_v2beta1.projects.agent.environments.users.sessions.contexts.html">contexts()</a></code>
</p>
<p class="firstline">Returns the contexts Resource.</p>

<p class="toc_element">
  <code><a href="dialogflow_v2beta1.projects.agent.environments.users.sessions.entityTypes.html">entityTypes()</a></code>
</p>
<p class="firstline">Returns the entityTypes Resource.</p>

<p class="toc_element">
  <code><a href="#deleteContexts">deleteContexts(parent, x__xgafv=None)</a></code></p>
<p class="firstline">Deletes all active contexts in the specified session.</p>
<p class="toc_element">
  <code><a href="#detectIntent">detectIntent(session, body, x__xgafv=None)</a></code></p>
<p class="firstline">Processes a natural language query and returns structured, actionable data as a result.</p>
<h3>Method Details</h3>
<div class="method">
    <code class="details" id="deleteContexts">deleteContexts(parent, x__xgafv=None)</code>
  <pre>Deletes all active contexts in the specified session.

Args:
  parent: string, Required. The name of the session to delete all contexts from. Format:
`projects/<Project ID>/agent/sessions/<Session ID>` or `projects/<Project
ID>/agent/environments/<Environment ID>/users/<User ID>/sessions/<Session
ID>`. If `Environment ID` is not specified, we assume the default 'draft'
environment. If `User ID` is not specified, we assume the default '-' user. (required)
  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # A generic empty message that you can re-use to avoid defining duplicated
      # empty messages in your APIs. A typical example is to use it as the request
      # or the response type of an API method. For instance:
      #
      #     service Foo {
      #       rpc Bar(google.protobuf.Empty) returns (google.protobuf.Empty);
      #     }
      #
      # The JSON representation for `Empty` is an empty JSON object `{}`.
  }</pre>
</div>
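Both methods above accept session names in the same documented format. As a minimal sketch, a small helper can assemble that name; the function name and its parameters are illustrative, not part of the generated client API:

```python
def session_path(project_id, session_id, environment_id=None, user_id=None):
    """Build a session name in the format documented above.

    Omitting environment_id implies the default 'draft' environment;
    omitting user_id implies the default '-' user.
    """
    if environment_id is None:
        return f"projects/{project_id}/agent/sessions/{session_id}"
    return (
        f"projects/{project_id}/agent/environments/{environment_id}"
        f"/users/{user_id or '-'}/sessions/{session_id}"
    )
```

For example, `session_path("my-project", "abc123")` yields `projects/my-project/agent/sessions/abc123`, which can be passed as the `parent` argument of `deleteContexts` or the `session` argument of `detectIntent`.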
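The `detectIntent` request body documented below is an ordinary JSON object. A minimal sketch of a text-only query, passed as the `body` argument; the query string and language code are placeholders:

```python
# Minimal detectIntent request body containing only a text query.
# All other fields (outputAudioConfig, queryParams, inputAudio) are optional.
body = {
    "queryInput": {
        "text": {
            "text": "I want a pizza",   # must not exceed 256 characters
            "languageCode": "en-US",    # required for text queries
        }
    }
}
```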
<div class="method">
    <code class="details" id="detectIntent">detectIntent(session, body, x__xgafv=None)</code>
  <pre>Processes a natural language query and returns structured, actionable data
as a result. This method is not idempotent, because it may cause contexts
and session entity types to be updated, which in turn might affect
results of future queries.

Args:
  session: string, Required. The name of the session this query is sent to. Format:
`projects/<Project ID>/agent/sessions/<Session ID>`, or
`projects/<Project ID>/agent/environments/<Environment ID>/users/<User
ID>/sessions/<Session ID>`. If `Environment ID` is not specified, we assume
the default 'draft' environment. If `User ID` is not specified, we assume
the default '-' user. It is up to the API caller to choose an appropriate
`Session ID` and `User ID`. They can be random numbers or some type of user
and session identifiers (preferably hashed). The length of the `Session ID`
and `User ID` must not exceed 36 characters. (required)
  body: object, The request body. (required)
    The object takes the form of:

{ # The request to detect user's intent.
    "outputAudioConfig": { # Instructs the speech synthesizer how to generate the output audio content. # Optional. Instructs the speech synthesizer how to generate the output
        # audio. If this field is not set and the agent-level speech synthesizer is
        # not configured, no output audio is generated.
      "sampleRateHertz": 42, # Optional. The synthesis sample rate (in hertz) for this audio. If not
          # provided, then the synthesizer will use the default sample rate based on
          # the audio encoding. If this is different from the voice's natural sample
          # rate, then the synthesizer will honor this request by converting to the
          # desired sample rate (which might result in worse audio quality).
      "audioEncoding": "A String", # Required. Audio encoding of the synthesized audio content.
      "synthesizeSpeechConfig": { # Configuration of how speech should be synthesized. # Optional. Configuration of how speech should be synthesized.
        "effectsProfileId": [ # Optional. An identifier which selects 'audio effects' profiles that are
            # applied on (post-synthesized) text to speech. Effects are applied on top
            # of each other in the order they are given.
          "A String",
        ],
        "voice": { # Description of which voice to use for speech synthesis. # Optional. The desired voice of the synthesized audio.
          "ssmlGender": "A String", # Optional. The preferred gender of the voice. If not set, the service will
              # choose a voice based on the other parameters such as language_code and
              # name. Note that this is only a preference, not a requirement. If a
              # voice of the appropriate gender is not available, the synthesizer should
              # substitute a voice with a different gender rather than failing the request.
          "name": "A String", # Optional. The name of the voice. If not set, the service will choose a
              # voice based on the other parameters such as language_code and gender.
        },
        "speakingRate": 3.14, # Optional. Speaking rate/speed, in the range [0.25, 4.0]. 1.0 is the normal
            # native speed supported by the specific voice. 2.0 is twice as fast, and
            # 0.5 is half as fast. If unset (0.0), defaults to the native 1.0 speed. Any
            # other values < 0.25 or > 4.0 will return an error.
        "volumeGainDb": 3.14, # Optional. Volume gain (in dB) of the normal native volume supported by the
            # specific voice, in the range [-96.0, 16.0]. If unset, or set to a value of
            # 0.0 (dB), will play at normal native signal amplitude. A value of -6.0 (dB)
            # will play at approximately half the amplitude of the normal native signal
            # amplitude. A value of +6.0 (dB) will play at approximately twice the
            # amplitude of the normal native signal amplitude. We strongly recommend not
            # to exceed +10 (dB) as there's usually no effective increase in loudness for
            # any value greater than that.
        "pitch": 3.14, # Optional. Speaking pitch, in the range [-20.0, 20.0]. 20 means increase 20
            # semitones from the original pitch. -20 means decrease 20 semitones from the
            # original pitch.
      },
    },
    "inputAudio": "A String", # Optional. The natural language speech audio to be processed. This field
        # should be populated if and only if `query_input` is set to an input audio
        # config. A single request can contain up to 1 minute of speech audio data.
    "queryInput": { # Represents the query input. # Required. The input specification. It can be set to:
        #
        # 1.  an audio config which instructs the speech recognizer how to process
        #     the speech audio,
        #
        # 2.  a conversational query in the form of text, or
        #
        # 3.  an event that specifies which intent to trigger.
      "text": { # Represents the natural language text to be processed. # The natural language text to be processed.
        "text": "A String", # Required. The UTF-8 encoded natural language text to be processed.
            # Text length must not exceed 256 characters.
        "languageCode": "A String", # Required. The language of this conversational query. See [Language
            # Support](https://cloud.google.com/dialogflow-enterprise/docs/reference/language)
            # for a list of the currently supported language codes. Note that queries in
            # the same session do not necessarily need to specify the same language.
      },
      "event": { # Events allow for matching intents by event name instead of the natural # The event to be processed.
          # language input. For instance, input `<event: { name: "welcome_event",
          # parameters: { name: "Sam" } }>` can trigger a personalized welcome response.
          # The parameter `name` may be used by the agent in the response:
          # `"Hello #welcome_event.name! What can I do for you today?"`.
        "languageCode": "A String", # Required. The language of this query. See [Language
            # Support](https://cloud.google.com/dialogflow-enterprise/docs/reference/language)
            # for a list of the currently supported language codes. Note that queries in
            # the same session do not necessarily need to specify the same language.
        "name": "A String", # Required. The unique identifier of the event.
        "parameters": { # Optional. The collection of parameters associated with the event.
          "a_key": "", # Properties of the object.
        },
      },
      "audioConfig": { # Instructs the speech recognizer on how to process the audio content. # Instructs the speech recognizer how to process the speech audio.
        "languageCode": "A String", # Required. The language of the supplied audio. Dialogflow does not do
            # translations. See [Language
            # Support](https://cloud.google.com/dialogflow-enterprise/docs/reference/language)
            # for a list of the currently supported language codes. Note that queries in
            # the same session do not necessarily need to specify the same language.
        "audioEncoding": "A String", # Required. Audio encoding of the audio content to process.
        "phraseHints": [ # Optional. A list of strings containing words and phrases that the speech
            # recognizer should recognize with higher likelihood.
            #
            # See [the Cloud Speech
            # documentation](https://cloud.google.com/speech-to-text/docs/basics#phrase-hints)
            # for more details.
          "A String",
        ],
        "enableWordInfo": True or False, # Optional. If `true`, Dialogflow returns SpeechWordInfo in
            # StreamingRecognitionResult with information about the recognized speech
            # words, e.g. start and end time offsets. If false or unspecified, Speech
            # doesn't return any word-level information.
        "sampleRateHertz": 42, # Required. Sample rate (in Hertz) of the audio content sent in the query.
            # Refer to
            # [Cloud Speech API
            # documentation](https://cloud.google.com/speech-to-text/docs/basics) for
            # more details.
        "modelVariant": "A String", # Optional. Which variant of the Speech model to use.
        "model": "A String", # Optional. Which Speech model to select for the given request. Select the
            # model best suited to your domain to get best results. If a model is not
            # explicitly specified, then we auto-select a model based on the parameters
            # in the InputAudioConfig.
            # If enhanced speech model is enabled for the agent and an enhanced
            # version of the specified model for the language does not exist, then the
            # speech is recognized using the standard version of the specified model.
            # Refer to
            # [Cloud Speech API
            # documentation](https://cloud.google.com/speech-to-text/docs/basics#select-model)
            # for more details.
      },
    },
    "queryParams": { # Represents the parameters of the conversational query. # Optional. The parameters of this query.
      "geoLocation": { # An object representing a latitude/longitude pair. This is expressed as a pair # Optional. The geo location of this conversational query.
          # of doubles representing degrees latitude and degrees longitude. Unless
          # specified otherwise, this must conform to the
          # <a href="http://www.unoosa.org/pdf/icg/2012/template/WGS_84.pdf">WGS84
          # standard</a>. Values must be within normalized ranges.
        "latitude": 3.14, # The latitude in degrees. It must be in the range [-90.0, +90.0].
        "longitude": 3.14, # The longitude in degrees. It must be in the range [-180.0, +180.0].
      },
      "contexts": [ # Optional. The collection of contexts to be activated before this query is
          # executed.
        { # Represents a context.
          "parameters": { # Optional. The collection of parameters associated with this context.
              # Refer to [this
              # doc](https://cloud.google.com/dialogflow-enterprise/docs/intents-actions-parameters)
              # for syntax.
            "a_key": "", # Properties of the object.
          },
          "name": "A String", # Required. The unique identifier of the context. Format:
              # `projects/<Project ID>/agent/sessions/<Session ID>/contexts/<Context ID>`,
              # or `projects/<Project ID>/agent/environments/<Environment ID>/users/<User
              # ID>/sessions/<Session ID>/contexts/<Context ID>`.
              #
              # The `Context ID` is always converted to lowercase, may only contain
              # characters in a-zA-Z0-9_-% and may be at most 250 bytes long.
              #
              # If `Environment ID` is not specified, we assume the default 'draft'
              # environment. If `User ID` is not specified, we assume the default '-' user.
          "lifespanCount": 42, # Optional. The number of conversational query requests after which the
              # context expires. If set to `0` (the default) the context expires
              # immediately. Contexts expire automatically after 20 minutes if there
              # are no matching queries.
        },
      ],
      "knowledgeBaseNames": [ # Optional. KnowledgeBases to get alternative results from. If not set, the
          # KnowledgeBases enabled in the agent (through the UI) will be used.
          # Format: `projects/<Project ID>/knowledgeBases/<Knowledge Base ID>`.
        "A String",
      ],
      "sentimentAnalysisRequestConfig": { # Configures the types of sentiment analysis to perform. # Optional. Configures the type of sentiment analysis to perform. If not
          # provided, sentiment analysis is not performed.
          # Note: Sentiment Analysis is currently only available for Enterprise Edition
          # agents.
        "analyzeQueryTextSentiment": True or False, # Optional. Instructs the service to perform sentiment analysis on
            # `query_text`. If not provided, sentiment analysis is not performed on
            # `query_text`.
      },
      "resetContexts": True or False, # Optional. Specifies whether to delete all contexts in the current session
          # before the new ones are activated.
      "timeZone": "A String", # Optional. The time zone of this conversational query from the
          # [time zone database](https://www.iana.org/time-zones), e.g.,
          # America/New_York, Europe/Paris. If not provided, the time zone specified in
          # agent settings is used.
      "payload": { # Optional. This field can be used to pass custom data into the webhook
          # associated with the agent. Arbitrary JSON objects are supported.
        "a_key": "", # Properties of the object.
      },
      "sessionEntityTypes": [ # Optional. Additional session entity types to replace or extend developer
          # entity types with. The entity synonyms apply to all languages and persist
          # for the session of this query.
        { # Represents a session entity type.
            #
            # Extends or replaces a developer entity type at the user session level (we
            # refer to the entity types defined at the agent level as "developer entity
            # types").
            #
            # Note: session entity types apply to all queries, regardless of the language.
          "entities": [ # Required. The collection of entities associated with this session entity
              # type.
            { # An **entity entry** for an associated entity type.
              "synonyms": [ # Required. A collection of value synonyms. For example, if the entity type
                  # is *vegetable*, and `value` is *scallions*, a synonym could be *green
                  # onions*.
                  #
                  # For `KIND_LIST` entity types:
                  #
                  # *   This collection must contain exactly one synonym equal to `value`.
                "A String",
              ],
              "value": "A String", # Required. The primary value associated with this entity entry.
                  # For example, if the entity type is *vegetable*, the value could be
                  # *scallions*.
                  #
                  # For `KIND_MAP` entity types:
                  #
                  # *   A canonical value to be used in place of synonyms.
                  #
                  # For `KIND_LIST` entity types:
                  #
                  # *   A string that can contain references to other entity types (with or
                  #     without aliases).
            },
          ],
          "name": "A String", # Required. The unique identifier of this session entity type. Format:
              # `projects/<Project ID>/agent/sessions/<Session ID>/entityTypes/<Entity Type
              # Display Name>`, or
              # `projects/<Project ID>/agent/environments/<Environment ID>/users/<User
              # ID>/sessions/<Session ID>/entityTypes/<Entity Type Display Name>`.
              # If `Environment ID` is not specified, we assume the default 'draft'
              # environment. If `User ID` is not specified, we assume the default '-' user.
              #
              # `<Entity Type Display Name>` must be the display name of an existing entity
              # type in the same agent that will be overridden or supplemented.
          "entityOverrideMode": "A String", # Required. Indicates whether the additional data should override or
              # supplement the developer entity type definition.
        },
      ],
    },
  }

  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # The message returned from the DetectIntent method.
    "outputAudioConfig": { # Instructs the speech synthesizer how to generate the output audio content. # The config used by the speech synthesizer to generate the output audio.
      "sampleRateHertz": 42, # Optional. The synthesis sample rate (in hertz) for this audio. If not
          # provided, then the synthesizer will use the default sample rate based on
          # the audio encoding. If this is different from the voice's natural sample
          # rate, then the synthesizer will honor this request by converting to the
          # desired sample rate (which might result in worse audio quality).
      "audioEncoding": "A String", # Required. Audio encoding of the synthesized audio content.
      "synthesizeSpeechConfig": { # Configuration of how speech should be synthesized. # Optional. Configuration of how speech should be synthesized.
        "effectsProfileId": [ # Optional. An identifier which selects 'audio effects' profiles that are
            # applied on (post-synthesized) text to speech. Effects are applied on top
            # of each other in the order they are given.
          "A String",
        ],
        "voice": { # Description of which voice to use for speech synthesis. # Optional. The desired voice of the synthesized audio.
          "ssmlGender": "A String", # Optional. The preferred gender of the voice. If not set, the service will
              # choose a voice based on the other parameters such as language_code and
              # name. Note that this is only a preference, not a requirement. If a
              # voice of the appropriate gender is not available, the synthesizer should
              # substitute a voice with a different gender rather than failing the request.
          "name": "A String", # Optional. The name of the voice. If not set, the service will choose a
              # voice based on the other parameters such as language_code and gender.
        },
        "speakingRate": 3.14, # Optional. Speaking rate/speed, in the range [0.25, 4.0]. 1.0 is the normal
            # native speed supported by the specific voice. 2.0 is twice as fast, and
            # 0.5 is half as fast. If unset (0.0), defaults to the native 1.0 speed. Any
            # other values < 0.25 or > 4.0 will return an error.
        "volumeGainDb": 3.14, # Optional. Volume gain (in dB) of the normal native volume supported by the
            # specific voice, in the range [-96.0, 16.0]. If unset, or set to a value of
            # 0.0 (dB), will play at normal native signal amplitude. A value of -6.0 (dB)
            # will play at approximately half the amplitude of the normal native signal
            # amplitude. A value of +6.0 (dB) will play at approximately twice the
            # amplitude of the normal native signal amplitude. We strongly recommend not
            # to exceed +10 (dB) as there's usually no effective increase in loudness for
            # any value greater than that.
        "pitch": 3.14, # Optional. Speaking pitch, in the range [-20.0, 20.0]. 20 means increase 20
            # semitones from the original pitch. -20 means decrease 20 semitones from the
            # original pitch.
      },
    },
    "queryResult": { # Represents the result of conversational query or event processing. # The selected results of the conversational query or event processing.
        # See `alternative_query_results` for additional potential results.
      "sentimentAnalysisResult": { # The result of sentiment analysis as configured by # The sentiment analysis result, which depends on the
          # `sentiment_analysis_request_config` specified in the request.
        "queryTextSentiment": { # The sentiment, such as positive/negative feeling or association, for a unit # The sentiment analysis result for `query_text`.
            # of analysis, such as the query text.
          "score": 3.14, # Sentiment score between -1.0 (negative sentiment) and 1.0 (positive
              # sentiment).
          "magnitude": 3.14, # A non-negative number in the [0, +inf) range, which represents the absolute
              # magnitude of sentiment, regardless of score (positive or negative).
        },
      },
      "fulfillmentText": "A String", # The text to be pronounced to the user or shown on the screen.
          # Note: This is a legacy field; `fulfillment_messages` should be preferred.
      "knowledgeAnswers": { # Represents the result of querying a Knowledge base. # The result from Knowledge Connector (if any), ordered by decreasing
          # `KnowledgeAnswers.match_confidence`.
        "answers": [ # A list of answers from Knowledge Connector.
          { # An answer from Knowledge Connector.
            "answer": "A String", # The piece of text from the `source` knowledge base document that answers
                # this conversational query.
            "source": "A String", # Indicates which Knowledge Document this answer was extracted from.
                # Format: `projects/<Project ID>/knowledgeBases/<Knowledge Base
                # ID>/documents/<Document ID>`.
            "matchConfidenceLevel": "A String", # The system's confidence level that this knowledge answer is a good match
                # for this conversational query.
                # NOTE: The confidence level for a given `<query, answer>` pair may change
                # without notice, as it depends on models that are constantly being
                # improved. However, it will change less frequently than the confidence
                # score below, and should be preferred for referencing the quality of an
                # answer.
            "matchConfidence": 3.14, # The system's confidence score that this Knowledge answer is a good match
                # for this conversational query.
                # The range is from 0.0 (completely uncertain) to 1.0 (completely certain).
                # Note: The confidence score is likely to vary somewhat (possibly even for
                # identical requests), as the underlying model is under constant
                # improvement. It may be deprecated in the future. We recommend using
                # `match_confidence_level`, which should generally be more stable.
            "faqQuestion": "A String", # The corresponding FAQ question if the answer was extracted from a FAQ
                # Document, empty otherwise.
          },
        ],
      },
      "parameters": { # The collection of extracted parameters.
        "a_key": "", # Properties of the object.
      },
      "languageCode": "A String", # The language that was triggered during intent detection.
          # See [Language
          # Support](https://cloud.google.com/dialogflow-enterprise/docs/reference/language)
          # for a list of the currently supported language codes.
      "speechRecognitionConfidence": 3.14, # The Speech recognition confidence between 0.0 and 1.0. A higher number
          # indicates an estimated greater likelihood that the recognized words are
          # correct. The default of 0.0 is a sentinel value indicating that confidence
          # was not set.
          #
          # This field is not guaranteed to be accurate or set. In particular, this
          # field isn't set for StreamingDetectIntent since the streaming endpoint has
          # separate confidence estimates per portion of the audio in
          # StreamingRecognitionResult.
      "intentDetectionConfidence": 3.14, # The intent detection confidence. Values range from 0.0
          # (completely uncertain) to 1.0 (completely certain).
          # If there are multiple `knowledge_answers` messages, this value is set to
          # the greatest `knowledgeAnswers.match_confidence` value in the list.
      "fulfillmentMessages": [ # The collection of rich messages to present to the user.
        { # Corresponds to the `Response` field in the Dialogflow console.
          "simpleResponses": { # The collection of simple response candidates. # Returns a voice or text-only response for Actions on Google.
              # This message in `QueryResult.fulfillment_messages` and
              # `WebhookResponse.fulfillment_messages` should contain only one
              # `SimpleResponse`.
            "simpleResponses": [ # Required. The list of simple responses.
              { # The simple response message containing speech or text.
                "textToSpeech": "A String", # One of text_to_speech or ssml must be provided. The plain text of the
                    # speech output. Mutually exclusive with ssml.
                "displayText": "A String", # Optional. The text to display.
                "ssml": "A String", # One of text_to_speech or ssml must be provided. Structured spoken
                    # response to the user in the SSML format. Mutually exclusive with
                    # text_to_speech.
              },
            ],
          },
          "quickReplies": { # The quick replies response message. # Displays quick replies.
            "quickReplies": [ # Optional. The collection of quick replies.
              "A String",
            ],
            "title": "A String", # Optional. The title of the collection of quick replies.
          },
          "platform": "A String", # Optional. The platform that this message is intended for.
          "text": { # The text response message. # Returns a text response.
            "text": [ # Optional. The collection of the agent's responses.
              "A String",
            ],
          },
          "image": { # The image response message. # Displays an image.
            "accessibilityText": "A String", # A text description of the image to be used for accessibility,
                # e.g., screen readers. Required if image_uri is set for CarouselSelect.
            "imageUri": "A String", # Optional. The public URI to an image file.
          },
          "telephonySynthesizeSpeech": { # Synthesizes speech and plays back the synthesized audio to the caller in # Synthesizes speech in Telephony Gateway.
              # Telephony Gateway.
              #
              # Telephony Gateway takes the synthesizer settings from
              # `DetectIntentResponse.output_audio_config`, which can either be set
              # at request level or come from the agent-level synthesizer config.
            "ssml": "A String", # The SSML to be synthesized. For more information, see
                # [SSML](https://developers.google.com/actions/reference/ssml).
            "text": "A String", # The raw text to be synthesized.
          },
          "suggestions": { # The collection of suggestions. # Displays suggestion chips for Actions on Google.
            "suggestions": [ # Required. The list of suggested replies.
              { # The suggestion chip message that the user can tap to quickly post a reply
                  # to the conversation.
                "title": "A String", # Required. The text shown in the suggestion chip.
              },
            ],
          },
          "telephonyPlayAudio": { # Plays audio from a file in Telephony Gateway. # Plays audio from a file in Telephony Gateway.
            "audioUri": "A String", # Required. URI to a Google Cloud Storage object containing the audio to
                # play, e.g., "gs://bucket/object". The object must contain a single
                # channel (mono) of linear PCM audio (2 bytes / sample) at 8kHz.
                #
                # This object must be readable by the `service-<Project
                # Number>@gcp-sa-dialogflow.iam.gserviceaccount.com` service account
                # where <Project Number> is the number of the Telephony Gateway project
                # (usually the same as the Dialogflow agent project). If the Google Cloud
                # Storage bucket is in the Telephony Gateway project, this permission is
                # added by default when enabling the Dialogflow V2 API.
                #
                # For audio from other sources, consider using the
                # `TelephonySynthesizeSpeech` message with SSML.
          },
          "linkOutSuggestion": { # The suggestion chip message that allows the user to jump out to the app # Displays a link out suggestion chip for Actions on Google.
              # or website associated with this agent.
            "uri": "A String", # Required. The URI of the app or site to open when the user taps the
                # suggestion chip.
            "destinationName": "A String", # Required. The name of the app or site this chip is linking to.
557          },
558          "basicCard": { # The basic card message. Useful for displaying information. # Displays a basic card for Actions on Google.
559            "buttons": [ # Optional. The collection of card buttons.
560              { # The button object that appears at the bottom of a card.
561                "openUriAction": { # Opens the given URI. # Required. Action to take when a user taps on the button.
562                  "uri": "A String", # Required. The HTTP or HTTPS scheme URI.
563                },
564                "title": "A String", # Required. The title of the button.
565              },
566            ],
567            "formattedText": "A String", # Required, unless image is present. The body text of the card.
568            "image": { # The image response message. # Optional. The image for the card.
569              "accessibilityText": "A String", # A text description of the image to be used for accessibility,
570                  # e.g., screen readers. Required if image_uri is set for CarouselSelect.
571              "imageUri": "A String", # Optional. The public URI to an image file.
572            },
573            "subtitle": "A String", # Optional. The subtitle of the card.
574            "title": "A String", # Optional. The title of the card.
575          },
576          "carouselSelect": { # The card for presenting a carousel of options to select from. # Displays a carousel card for Actions on Google.
577            "items": [ # Required. Carousel items.
578              { # An item in the carousel.
579                "info": { # Additional info about the select item for when it is triggered in a # Required. Additional info about the option item.
580                    # dialog.
581                  "synonyms": [ # Optional. A list of synonyms that can also be used to trigger this
582                      # item in dialog.
583                    "A String",
584                  ],
585                  "key": "A String", # Required. A unique key that will be sent back to the agent if this
586                      # response is given.
587                },
588                "image": { # The image response message. # Optional. The image to display.
589                  "accessibilityText": "A String", # A text description of the image to be used for accessibility,
590                      # e.g., screen readers. Required if image_uri is set for CarouselSelect.
591                  "imageUri": "A String", # Optional. The public URI to an image file.
592                },
593                "description": "A String", # Optional. The body text of the card.
594                "title": "A String", # Required. Title of the carousel item.
595              },
596            ],
597          },
598          "listSelect": { # The card for presenting a list of options to select from. # Displays a list card for Actions on Google.
599            "items": [ # Required. List items.
600              { # An item in the list.
601                "info": { # Additional info about the select item for when it is triggered in a # Required. Additional information about this option.
602                    # dialog.
603                  "synonyms": [ # Optional. A list of synonyms that can also be used to trigger this
604                      # item in dialog.
605                    "A String",
606                  ],
607                  "key": "A String", # Required. A unique key that will be sent back to the agent if this
608                      # response is given.
609                },
610                "image": { # The image response message. # Optional. The image to display.
611                  "accessibilityText": "A String", # A text description of the image to be used for accessibility,
612                      # e.g., screen readers. Required if image_uri is set for CarouselSelect.
613                  "imageUri": "A String", # Optional. The public URI to an image file.
614                },
615                "description": "A String", # Optional. The main text describing the item.
616                "title": "A String", # Required. The title of the list item.
617              },
618            ],
619            "title": "A String", # Optional. The overall title of the list.
620          },
621          "telephonyTransferCall": { # Transfers the call in Telephony Gateway. # Transfers the call in Telephony Gateway.
622            "phoneNumber": "A String", # Required. The phone number to transfer the call to
623                # in [E.164 format](https://en.wikipedia.org/wiki/E.164).
624                #
625                # We currently only allow transferring to US numbers (+1xxxyyyzzzz).
626          },
627          "payload": { # Returns a response containing a custom, platform-specific payload.
628              # See the Intent.Message.Platform type for a description of the
629              # structure that may be required for your platform.
630            "a_key": "", # Properties of the object.
631          },
632          "card": { # The card response message. # Displays a card.
633            "buttons": [ # Optional. The collection of card buttons.
634              { # Optional. Contains information about a button.
635                "text": "A String", # Optional. The text to show on the button.
636                "postback": "A String", # Optional. The text to send back to the Dialogflow API or a URI to
637                    # open.
638              },
639            ],
640            "title": "A String", # Optional. The title of the card.
641            "subtitle": "A String", # Optional. The subtitle of the card.
642            "imageUri": "A String", # Optional. The public URI to an image file for the card.
643          },
644        },
645      ],
646      "action": "A String", # The action name from the matched intent.
647      "intent": { # Represents an intent. # The intent that matched the conversational query. Some, not
648          # all fields are filled in this message, including but not limited to:
649          # `name`, `display_name` and `webhook_state`.
650          # Intents convert a number of user expressions or patterns into an action. An
651          # action is an extraction of a user command or sentence semantics.
652        "isFallback": True or False, # Optional. Indicates whether this is a fallback intent.
653        "mlDisabled": True or False, # Optional. Indicates whether Machine Learning is disabled for the intent.
654            # Note: If `ml_disabled` setting is set to true, then this intent is not
655            # taken into account during inference in `ML ONLY` match mode. Also,
656            # auto-markup in the UI is turned off.
657        "displayName": "A String", # Required. The name of this intent.
658        "name": "A String", # The unique identifier of this intent.
659            # Required for Intents.UpdateIntent and Intents.BatchUpdateIntents
660            # methods.
661            # Format: `projects/<Project ID>/agent/intents/<Intent ID>`.
662        "parameters": [ # Optional. The collection of parameters associated with the intent.
663          { # Represents intent parameters.
664            "mandatory": True or False, # Optional. Indicates whether the parameter is required. That is,
665                # whether the intent cannot be completed without collecting the parameter
666                # value.
667            "name": "A String", # The unique identifier of this parameter.
668            "defaultValue": "A String", # Optional. The default value to use when the `value` yields an empty
669                # result.
670                # Default values can be extracted from contexts by using the following
671                # syntax: `#context_name.parameter_name`.
672            "entityTypeDisplayName": "A String", # Optional. The name of the entity type, prefixed with `@`, that
673                # describes values of the parameter. If the parameter is
674                # required, this must be provided.
675            "value": "A String", # Optional. The definition of the parameter value. It can be:
676                # - a constant string,
677                # - a parameter value defined as `$parameter_name`,
678                # - an original parameter value defined as `$parameter_name.original`,
679                # - a parameter value from some context defined as
680                #   `#context_name.parameter_name`.
681            "prompts": [ # Optional. The collection of prompts that the agent can present to the
682                # user in order to collect value for the parameter.
683              "A String",
684            ],
685            "isList": True or False, # Optional. Indicates whether the parameter represents a list of values.
686            "displayName": "A String", # Required. The name of the parameter.
687          },
688        ],
689        "trainingPhrases": [ # Optional. The collection of examples that the agent is
690            # trained on.
691          { # Represents an example that the agent is trained on.
692            "parts": [ # Required. The ordered list of training phrase parts.
693                # The parts are concatenated in order to form the training phrase.
694                #
695                # Note: The API does not automatically annotate training phrases like the
696                # Dialogflow Console does.
697                #
698                # Note: Do not forget to include whitespace at part boundaries,
699                # so the training phrase is well formatted when the parts are concatenated.
700                #
701                # If the training phrase does not need to be annotated with parameters,
702                # you just need a single part with only the Part.text field set.
703                #
704                # If you want to annotate the training phrase, you must create multiple
705                # parts, where the fields of each part are populated in one of two ways:
706                #
707                # -   `Part.text` is set to a part of the phrase that has no parameters.
708                # -   `Part.text` is set to a part of the phrase that you want to annotate,
709                #     and the `entity_type`, `alias`, and `user_defined` fields are all
710                #     set.
711              { # Represents a part of a training phrase.
712                "text": "A String", # Required. The text for this part.
713                "entityType": "A String", # Optional. The entity type name prefixed with `@`.
714                    # This field is required for annotated parts of the training phrase.
715                "userDefined": True or False, # Optional. Indicates whether the text was manually annotated.
716                    # This field is set to true when the Dialogflow Console is used to
717                    # manually annotate the part. When creating an annotated part with the
718                    # API, you must set this to true.
719                "alias": "A String", # Optional. The parameter name for the value extracted from the
720                    # annotated part of the example.
721                    # This field is required for annotated parts of the training phrase.
722              },
723            ],
724            "type": "A String", # Required. The type of the training phrase.
725            "name": "A String", # Output only. The unique identifier of this training phrase.
726            "timesAddedCount": 42, # Optional. Indicates how many times this example was added to
727                # the intent. Each time a developer adds an existing sample by editing an
728                # intent or training, this counter is increased.
729          },
730        ],
731        "followupIntentInfo": [ # Read-only. Information about all followup intents that have this intent as
732            # a direct or indirect parent. We populate this field only in the output.
733          { # Represents a single followup intent in the chain.
734            "followupIntentName": "A String", # The unique identifier of the followup intent.
735                # Format: `projects/<Project ID>/agent/intents/<Intent ID>`.
736            "parentFollowupIntentName": "A String", # The unique identifier of the followup intent's parent.
737                # Format: `projects/<Project ID>/agent/intents/<Intent ID>`.
738          },
739        ],
740        "webhookState": "A String", # Optional. Indicates whether webhooks are enabled for the intent.
741        "resetContexts": True or False, # Optional. Indicates whether to delete all contexts in the current
742            # session when this intent is matched.
743        "messages": [ # Optional. The collection of rich messages corresponding to the
744            # `Response` field in the Dialogflow console.
745          { # Corresponds to the `Response` field in the Dialogflow console.
746            "simpleResponses": { # The collection of simple response candidates. # Returns a voice or text-only response for Actions on Google.
747                # This message in `QueryResult.fulfillment_messages` and
748                # `WebhookResponse.fulfillment_messages` should contain only one
749                # `SimpleResponse`.
750              "simpleResponses": [ # Required. The list of simple responses.
751                { # The simple response message containing speech or text.
752                  "textToSpeech": "A String", # One of text_to_speech or ssml must be provided. The plain text of the
753                      # speech output. Mutually exclusive with ssml.
754                  "displayText": "A String", # Optional. The text to display.
755                  "ssml": "A String", # One of text_to_speech or ssml must be provided. Structured spoken
756                      # response to the user in the SSML format. Mutually exclusive with
757                      # text_to_speech.
758                },
759              ],
760            },
761            "quickReplies": { # The quick replies response message. # Displays quick replies.
762              "quickReplies": [ # Optional. The collection of quick replies.
763                "A String",
764              ],
765              "title": "A String", # Optional. The title of the collection of quick replies.
766            },
767            "platform": "A String", # Optional. The platform that this message is intended for.
768            "text": { # The text response message. # Returns a text response.
769              "text": [ # Optional. The collection of the agent's responses.
770                "A String",
771              ],
772            },
773            "image": { # The image response message. # Displays an image.
774              "accessibilityText": "A String", # A text description of the image to be used for accessibility,
775                  # e.g., screen readers. Required if image_uri is set for CarouselSelect.
776              "imageUri": "A String", # Optional. The public URI to an image file.
777            },
778            "telephonySynthesizeSpeech": { # Synthesizes speech and plays back the synthesized audio to the caller in # Synthesizes speech in Telephony Gateway.
779                # Telephony Gateway.
780                #
781                # Telephony Gateway takes the synthesizer settings from
782                # `DetectIntentResponse.output_audio_config` which can either be set
783                # at request-level or can come from the agent-level synthesizer config.
784              "ssml": "A String", # The SSML to be synthesized. For more information, see
785                  # [SSML](https://developers.google.com/actions/reference/ssml).
786              "text": "A String", # The raw text to be synthesized.
787            },
788            "suggestions": { # The collection of suggestions. # Displays suggestion chips for Actions on Google.
789              "suggestions": [ # Required. The list of suggested replies.
790                { # The suggestion chip message that the user can tap to quickly post a reply
791                    # to the conversation.
792                  "title": "A String", # Required. The text shown the in the suggestion chip.
793                },
794              ],
795            },
796            "telephonyPlayAudio": { # Plays audio from a file in Telephony Gateway. # Plays audio from a file in Telephony Gateway.
797              "audioUri": "A String", # Required. URI to a Google Cloud Storage object containing the audio to
798                  # play, e.g., "gs://bucket/object". The object must contain a single
799                  # channel (mono) of linear PCM audio (2 bytes / sample) at 8kHz.
800                  #
801                  # This object must be readable by the `service-<Project
802                  # Number>@gcp-sa-dialogflow.iam.gserviceaccount.com` service account
803                  # where <Project Number> is the number of the Telephony Gateway project
804                  # (usually the same as the Dialogflow agent project). If the Google Cloud
805                  # Storage bucket is in the Telephony Gateway project, this permission is
806                  # added by default when enabling the Dialogflow V2 API.
807                  #
808                  # For audio from other sources, consider using the
809                  # `TelephonySynthesizeSpeech` message with SSML.
810            },
811            "linkOutSuggestion": { # The suggestion chip message that allows the user to jump out to the app # Displays a link out suggestion chip for Actions on Google.
812                # or website associated with this agent.
813              "uri": "A String", # Required. The URI of the app or site to open when the user taps the
814                  # suggestion chip.
815              "destinationName": "A String", # Required. The name of the app or site this chip is linking to.
816            },
817            "basicCard": { # The basic card message. Useful for displaying information. # Displays a basic card for Actions on Google.
818              "buttons": [ # Optional. The collection of card buttons.
819                { # The button object that appears at the bottom of a card.
820                  "openUriAction": { # Opens the given URI. # Required. Action to take when a user taps on the button.
821                    "uri": "A String", # Required. The HTTP or HTTPS scheme URI.
822                  },
823                  "title": "A String", # Required. The title of the button.
824                },
825              ],
826              "formattedText": "A String", # Required, unless image is present. The body text of the card.
827              "image": { # The image response message. # Optional. The image for the card.
828                "accessibilityText": "A String", # A text description of the image to be used for accessibility,
829                    # e.g., screen readers. Required if image_uri is set for CarouselSelect.
830                "imageUri": "A String", # Optional. The public URI to an image file.
831              },
832              "subtitle": "A String", # Optional. The subtitle of the card.
833              "title": "A String", # Optional. The title of the card.
834            },
835            "carouselSelect": { # The card for presenting a carousel of options to select from. # Displays a carousel card for Actions on Google.
836              "items": [ # Required. Carousel items.
837                { # An item in the carousel.
838                  "info": { # Additional info about the select item for when it is triggered in a # Required. Additional info about the option item.
839                      # dialog.
840                    "synonyms": [ # Optional. A list of synonyms that can also be used to trigger this
841                        # item in dialog.
842                      "A String",
843                    ],
844                    "key": "A String", # Required. A unique key that will be sent back to the agent if this
845                        # response is given.
846                  },
847                  "image": { # The image response message. # Optional. The image to display.
848                    "accessibilityText": "A String", # A text description of the image to be used for accessibility,
849                        # e.g., screen readers. Required if image_uri is set for CarouselSelect.
850                    "imageUri": "A String", # Optional. The public URI to an image file.
851                  },
852                  "description": "A String", # Optional. The body text of the card.
853                  "title": "A String", # Required. Title of the carousel item.
854                },
855              ],
856            },
857            "listSelect": { # The card for presenting a list of options to select from. # Displays a list card for Actions on Google.
858              "items": [ # Required. List items.
859                { # An item in the list.
860                  "info": { # Additional info about the select item for when it is triggered in a # Required. Additional information about this option.
861                      # dialog.
862                    "synonyms": [ # Optional. A list of synonyms that can also be used to trigger this
863                        # item in dialog.
864                      "A String",
865                    ],
866                    "key": "A String", # Required. A unique key that will be sent back to the agent if this
867                        # response is given.
868                  },
869                  "image": { # The image response message. # Optional. The image to display.
870                    "accessibilityText": "A String", # A text description of the image to be used for accessibility,
871                        # e.g., screen readers. Required if image_uri is set for CarouselSelect.
872                    "imageUri": "A String", # Optional. The public URI to an image file.
873                  },
874                  "description": "A String", # Optional. The main text describing the item.
875                  "title": "A String", # Required. The title of the list item.
876                },
877              ],
878              "title": "A String", # Optional. The overall title of the list.
879            },
880            "telephonyTransferCall": { # Transfers the call in Telephony Gateway. # Transfers the call in Telephony Gateway.
881              "phoneNumber": "A String", # Required. The phone number to transfer the call to
882                  # in [E.164 format](https://en.wikipedia.org/wiki/E.164).
883                  #
884                  # We currently only allow transferring to US numbers (+1xxxyyyzzzz).
885            },
886            "payload": { # Returns a response containing a custom, platform-specific payload.
887                # See the Intent.Message.Platform type for a description of the
888                # structure that may be required for your platform.
889              "a_key": "", # Properties of the object.
890            },
891            "card": { # The card response message. # Displays a card.
892              "buttons": [ # Optional. The collection of card buttons.
893                { # Optional. Contains information about a button.
894                  "text": "A String", # Optional. The text to show on the button.
895                  "postback": "A String", # Optional. The text to send back to the Dialogflow API or a URI to
896                      # open.
897                },
898              ],
899              "title": "A String", # Optional. The title of the card.
900              "subtitle": "A String", # Optional. The subtitle of the card.
901              "imageUri": "A String", # Optional. The public URI to an image file for the card.
902            },
903          },
904        ],
905        "parentFollowupIntentName": "A String", # Read-only after creation. The unique identifier of the parent intent in the
906            # chain of followup intents. You can set this field when creating an intent,
907            # for example with CreateIntent or BatchUpdateIntents, in order to
908            # make this intent a followup intent.
909            #
910            # It identifies the parent followup intent.
911            # Format: `projects/<Project ID>/agent/intents/<Intent ID>`.
912        "defaultResponsePlatforms": [ # Optional. The list of platforms for which the first responses will be
913            # copied from the messages in PLATFORM_UNSPECIFIED (i.e. default platform).
914          "A String",
915        ],
916        "priority": 42, # Optional. The priority of this intent. Higher numbers represent higher
917            # priorities. If this is zero or unspecified, we use the default
918            # priority 500000.
919            #
920            # Negative numbers mean that the intent is disabled.
921        "rootFollowupIntentName": "A String", # Read-only. The unique identifier of the root intent in the chain of
922            # followup intents. It identifies the correct followup intents chain for
923            # this intent. We populate this field only in the output.
924            #
925            # Format: `projects/<Project ID>/agent/intents/<Intent ID>`.
926        "endInteraction": True or False, # Optional. Indicates that this intent ends an interaction. Some integrations
927            # (e.g., Actions on Google or Dialogflow phone gateway) use this information
928            # to close interaction with an end user. Default is false.
929        "inputContextNames": [ # Optional. The list of context names required for this intent to be
930            # triggered.
931            # Format: `projects/<Project ID>/agent/sessions/-/contexts/<Context ID>`.
932          "A String",
933        ],
934        "mlEnabled": True or False, # Optional. Indicates whether Machine Learning is enabled for the intent.
935            # Note: If `ml_enabled` setting is set to false, then this intent is not
936            # taken into account during inference in `ML ONLY` match mode. Also,
937            # auto-markup in the UI is turned off.
938            # DEPRECATED! Please use `ml_disabled` field instead.
939            # NOTE: If both `ml_enabled` and `ml_disabled` are either not set or false,
940            # then the default value is determined as follows:
941            # - Before April 15th, 2018 the default is:
942            #   ml_enabled = false / ml_disabled = true.
943            # - After April 15th, 2018 the default is:
944            #   ml_enabled = true / ml_disabled = false.
945        "action": "A String", # Optional. The name of the action associated with the intent.
946            # Note: The action name must not contain whitespaces.
947        "outputContexts": [ # Optional. The collection of contexts that are activated when the intent
948            # is matched. Context messages in this collection should not set the
949            # parameters field. Setting the `lifespan_count` to 0 will reset the context
950            # when the intent is matched.
951            # Format: `projects/<Project ID>/agent/sessions/-/contexts/<Context ID>`.
952          { # Represents a context.
953            "parameters": { # Optional. The collection of parameters associated with this context.
954                # Refer to [this
955                # doc](https://cloud.google.com/dialogflow-enterprise/docs/intents-actions-parameters)
956                # for syntax.
957              "a_key": "", # Properties of the object.
958            },
959            "name": "A String", # Required. The unique identifier of the context. Format:
960                # `projects/<Project ID>/agent/sessions/<Session ID>/contexts/<Context ID>`,
961                # or `projects/<Project ID>/agent/environments/<Environment ID>/users/<User
962                # ID>/sessions/<Session ID>/contexts/<Context ID>`.
963                #
964                # The `Context ID` is always converted to lowercase, may only contain
965                # characters in a-zA-Z0-9_-% and may be at most 250 bytes long.
966                #
967                # If `Environment ID` is not specified, we assume default 'draft'
968                # environment. If `User ID` is not specified, we assume default '-' user.
969            "lifespanCount": 42, # Optional. The number of conversational query requests after which the
970                # context expires. If set to `0` (the default) the context expires
971                # immediately. Contexts expire automatically after 20 minutes if there
972                # are no matching queries.
973          },
974        ],
975        "events": [ # Optional. The collection of event names that trigger the intent.
976            # If the collection of input contexts is not empty, all of the contexts must
977            # be present in the active user session for an event to trigger this intent.
978          "A String",
979        ],
980      },
981      "webhookPayload": { # If the query was fulfilled by a webhook call, this field is set to the
982          # value of the `payload` field returned in the webhook response.
983        "a_key": "", # Properties of the object.
984      },
985      "diagnosticInfo": { # The free-form diagnostic info. For example, this field could contain
986          # webhook call latency. The string keys of the Struct's fields map can change
987          # without notice.
988        "a_key": "", # Properties of the object.
989      },
      "queryText": "A String", # The original conversational query text:
          #
          # - If natural language text was provided as input, `query_text` contains
          #   a copy of the input.
          # - If natural language speech audio was provided as input, `query_text`
          #   contains the speech recognition result. If the speech recognizer produced
          #   multiple alternatives, a particular one is picked.
          # - If automatic spell correction is enabled, `query_text` will contain the
          #   corrected user input.
      "outputContexts": [ # The collection of output contexts. If applicable,
          # `output_contexts.parameters` contains entries with name
          # `<parameter name>.original` containing the original parameter values
          # before the query.
        { # Represents a context.
          "parameters": { # Optional. The collection of parameters associated with this context.
              # Refer to [this
              # doc](https://cloud.google.com/dialogflow-enterprise/docs/intents-actions-parameters)
              # for syntax.
            "a_key": "", # Properties of the object.
          },
          "name": "A String", # Required. The unique identifier of the context. Format:
              # `projects/<Project ID>/agent/sessions/<Session ID>/contexts/<Context ID>`,
              # or `projects/<Project ID>/agent/environments/<Environment ID>/users/<User
              # ID>/sessions/<Session ID>/contexts/<Context ID>`.
              #
              # The `Context ID` is always converted to lowercase, may only contain
              # characters in a-zA-Z0-9_-% and may be at most 250 bytes long.
              #
              # If `Environment ID` is not specified, we assume default 'draft'
              # environment. If `User ID` is not specified, we assume default '-' user.
          "lifespanCount": 42, # Optional. The number of conversational query requests after which the
              # context expires. If set to `0` (the default) the context expires
              # immediately. Contexts expire automatically after 20 minutes if there
              # are no matching queries.
        },
      ],
      "webhookSource": "A String", # If the query was fulfilled by a webhook call, this field is set to the
          # value of the `source` field returned in the webhook response.
      "allRequiredParamsPresent": True or False, # This field is set to:
          #
          # - `false` if the matched intent has required parameters and not all of
          #    the required parameter values have been collected.
          # - `true` if all required parameter values have been collected, or if the
          #    matched intent doesn't contain any required parameters.
    },
    "responseId": "A String", # The unique identifier of the response. It can be used to
        # locate a response in the training example set or for reporting issues.
    "outputAudio": "A String", # The audio data bytes encoded as specified in the request.
        # Note: The output audio is generated based on the values of default platform
        # text responses found in the `query_result.fulfillment_messages` field. If
        # multiple default text responses exist, they will be concatenated when
        # generating audio. If no default platform text responses exist, the
        # generated audio content will be empty.
    "alternativeQueryResults": [ # If Knowledge Connectors are enabled, there could be more than one result
        # returned for a given query or event, and this field will contain all
        # results except for the top one, which is captured in query_result. The
        # alternative results are ordered by decreasing
        # `QueryResult.intent_detection_confidence`. If Knowledge Connectors are
        # disabled, this field will be empty until multiple responses for regular
        # intents are supported, at which point those additional results will be
        # surfaced here.
      { # Represents the result of conversational query or event processing.
        "sentimentAnalysisResult": { # The result of sentiment analysis as configured by # The sentiment analysis result, which depends on the
            # `sentiment_analysis_request_config` specified in the request.
            # `sentiment_analysis_request_config`.
          "queryTextSentiment": { # The sentiment, such as positive/negative feeling or association, for a unit # The sentiment analysis result for `query_text`.
              # of analysis, such as the query text.
            "score": 3.14, # Sentiment score between -1.0 (negative sentiment) and 1.0 (positive
                # sentiment).
            "magnitude": 3.14, # A non-negative number in the [0, +inf) range, which represents the absolute
                # magnitude of sentiment, regardless of score (positive or negative).
          },
        },
        "fulfillmentText": "A String", # The text to be pronounced to the user or shown on the screen.
            # Note: This is a legacy field, `fulfillment_messages` should be preferred.
        "knowledgeAnswers": { # Represents the result of querying a Knowledge base. # The result from Knowledge Connector (if any), ordered by decreasing
            # `KnowledgeAnswers.match_confidence`.
          "answers": [ # A list of answers from Knowledge Connector.
            { # An answer from Knowledge Connector.
              "answer": "A String", # The piece of text from the `source` knowledge base document that answers
                  # this conversational query.
              "source": "A String", # Indicates which Knowledge Document this answer was extracted from.
                  # Format: `projects/<Project ID>/knowledgeBases/<Knowledge Base
                  # ID>/documents/<Document ID>`.
              "matchConfidenceLevel": "A String", # The system's confidence level that this knowledge answer is a good match
                  # for this conversational query.
                  # NOTE: The confidence level for a given `<query, answer>` pair may change
                  # without notice, as it depends on models that are constantly being
                  # improved. However, it will change less frequently than the confidence
                  # score below, and should be preferred for referencing the quality of an
                  # answer.
              "matchConfidence": 3.14, # The system's confidence score that this Knowledge answer is a good match
                  # for this conversational query.
                  # The range is from 0.0 (completely uncertain) to 1.0 (completely certain).
                  # Note: The confidence score is likely to vary somewhat (possibly even for
                  # identical requests), as the underlying model is under constant
                  # improvement. It may be deprecated in the future. We recommend using
                  # `match_confidence_level` which should be generally more stable.
              "faqQuestion": "A String", # The corresponding FAQ question if the answer was extracted from a FAQ
                  # Document, empty otherwise.
            },
          ],
        },
        "parameters": { # The collection of extracted parameters.
          "a_key": "", # Properties of the object.
        },
        "languageCode": "A String", # The language that was triggered during intent detection.
            # See [Language
            # Support](https://cloud.google.com/dialogflow-enterprise/docs/reference/language)
            # for a list of the currently supported language codes.
        "speechRecognitionConfidence": 3.14, # The Speech recognition confidence between 0.0 and 1.0. A higher number
            # indicates an estimated greater likelihood that the recognized words are
            # correct. The default of 0.0 is a sentinel value indicating that confidence
            # was not set.
            #
            # This field is not guaranteed to be accurate or set. In particular this
            # field isn't set for StreamingDetectIntent since the streaming endpoint has
            # separate confidence estimates per portion of the audio in
            # StreamingRecognitionResult.
        "intentDetectionConfidence": 3.14, # The intent detection confidence. Values range from 0.0
            # (completely uncertain) to 1.0 (completely certain).
            # If there are multiple `knowledge_answers` messages, this value is set to
            # the greatest `knowledgeAnswers.match_confidence` value in the list.
        "fulfillmentMessages": [ # The collection of rich messages to present to the user.
          { # Corresponds to the `Response` field in the Dialogflow console.
            "simpleResponses": { # The collection of simple response candidates. # Returns a voice or text-only response for Actions on Google.
                # This message in `QueryResult.fulfillment_messages` and
                # `WebhookResponse.fulfillment_messages` should contain only one
                # `SimpleResponse`.
              "simpleResponses": [ # Required. The list of simple responses.
                { # The simple response message containing speech or text.
                  "textToSpeech": "A String", # One of text_to_speech or ssml must be provided. The plain text of the
                      # speech output. Mutually exclusive with ssml.
                  "displayText": "A String", # Optional. The text to display.
                  "ssml": "A String", # One of text_to_speech or ssml must be provided. Structured spoken
                      # response to the user in the SSML format. Mutually exclusive with
                      # text_to_speech.
                },
              ],
            },
            "quickReplies": { # The quick replies response message. # Displays quick replies.
              "quickReplies": [ # Optional. The collection of quick replies.
                "A String",
              ],
              "title": "A String", # Optional. The title of the collection of quick replies.
            },
            "platform": "A String", # Optional. The platform that this message is intended for.
            "text": { # The text response message. # Returns a text response.
              "text": [ # Optional. The collection of the agent's responses.
                "A String",
              ],
            },
            "image": { # The image response message. # Displays an image.
              "accessibilityText": "A String", # A text description of the image to be used for accessibility,
                  # e.g., screen readers. Required if image_uri is set for CarouselSelect.
              "imageUri": "A String", # Optional. The public URI to an image file.
            },
            "telephonySynthesizeSpeech": { # Synthesizes speech and plays back the synthesized audio to the caller in # Synthesizes speech in Telephony Gateway.
                # Telephony Gateway.
                #
                # Telephony Gateway takes the synthesizer settings from
                # `DetectIntentResponse.output_audio_config` which can either be set
                # at request-level or can come from the agent-level synthesizer config.
              "ssml": "A String", # The SSML to be synthesized. For more information, see
                  # [SSML](https://developers.google.com/actions/reference/ssml).
              "text": "A String", # The raw text to be synthesized.
            },
            "suggestions": { # The collection of suggestions. # Displays suggestion chips for Actions on Google.
              "suggestions": [ # Required. The list of suggested replies.
                { # The suggestion chip message that the user can tap to quickly post a reply
                    # to the conversation.
                  "title": "A String", # Required. The text shown in the suggestion chip.
                },
              ],
            },
            "telephonyPlayAudio": { # Plays audio from a file in Telephony Gateway. # Plays audio from a file in Telephony Gateway.
              "audioUri": "A String", # Required. URI to a Google Cloud Storage object containing the audio to
                  # play, e.g., "gs://bucket/object". The object must contain a single
                  # channel (mono) of linear PCM audio (2 bytes / sample) at 8kHz.
                  #
                  # This object must be readable by the `service-<Project
                  # Number>@gcp-sa-dialogflow.iam.gserviceaccount.com` service account
                  # where <Project Number> is the number of the Telephony Gateway project
                  # (usually the same as the Dialogflow agent project). If the Google Cloud
                  # Storage bucket is in the Telephony Gateway project, this permission is
                  # added by default when enabling the Dialogflow V2 API.
                  #
                  # For audio from other sources, consider using the
                  # `TelephonySynthesizeSpeech` message with SSML.
            },
            "linkOutSuggestion": { # The suggestion chip message that allows the user to jump out to the app # Displays a link out suggestion chip for Actions on Google.
                # or website associated with this agent.
              "uri": "A String", # Required. The URI of the app or site to open when the user taps the
                  # suggestion chip.
              "destinationName": "A String", # Required. The name of the app or site this chip is linking to.
            },
            "basicCard": { # The basic card message. Useful for displaying information. # Displays a basic card for Actions on Google.
              "buttons": [ # Optional. The collection of card buttons.
                { # The button object that appears at the bottom of a card.
                  "openUriAction": { # Opens the given URI. # Required. Action to take when a user taps on the button.
                    "uri": "A String", # Required. The HTTP or HTTPS scheme URI.
                  },
                  "title": "A String", # Required. The title of the button.
                },
              ],
              "formattedText": "A String", # Required, unless image is present. The body text of the card.
              "image": { # The image response message. # Optional. The image for the card.
                "accessibilityText": "A String", # A text description of the image to be used for accessibility,
                    # e.g., screen readers. Required if image_uri is set for CarouselSelect.
                "imageUri": "A String", # Optional. The public URI to an image file.
              },
              "subtitle": "A String", # Optional. The subtitle of the card.
              "title": "A String", # Optional. The title of the card.
            },
            "carouselSelect": { # The card for presenting a carousel of options to select from. # Displays a carousel card for Actions on Google.
              "items": [ # Required. Carousel items.
                { # An item in the carousel.
                  "info": { # Additional info about the select item for when it is triggered in a # Required. Additional info about the option item.
                      # dialog.
                    "synonyms": [ # Optional. A list of synonyms that can also be used to trigger this
                        # item in dialog.
                      "A String",
                    ],
                    "key": "A String", # Required. A unique key that will be sent back to the agent if this
                        # response is given.
                  },
                  "image": { # The image response message. # Optional. The image to display.
                    "accessibilityText": "A String", # A text description of the image to be used for accessibility,
                        # e.g., screen readers. Required if image_uri is set for CarouselSelect.
                    "imageUri": "A String", # Optional. The public URI to an image file.
                  },
                  "description": "A String", # Optional. The body text of the card.
                  "title": "A String", # Required. Title of the carousel item.
                },
              ],
            },
            "listSelect": { # The card for presenting a list of options to select from. # Displays a list card for Actions on Google.
              "items": [ # Required. List items.
                { # An item in the list.
                  "info": { # Additional info about the select item for when it is triggered in a # Required. Additional information about this option.
                      # dialog.
                    "synonyms": [ # Optional. A list of synonyms that can also be used to trigger this
                        # item in dialog.
                      "A String",
                    ],
                    "key": "A String", # Required. A unique key that will be sent back to the agent if this
                        # response is given.
                  },
                  "image": { # The image response message. # Optional. The image to display.
                    "accessibilityText": "A String", # A text description of the image to be used for accessibility,
                        # e.g., screen readers. Required if image_uri is set for CarouselSelect.
                    "imageUri": "A String", # Optional. The public URI to an image file.
                  },
                  "description": "A String", # Optional. The main text describing the item.
                  "title": "A String", # Required. The title of the list item.
                },
              ],
              "title": "A String", # Optional. The overall title of the list.
            },
            "telephonyTransferCall": { # Transfers the call in Telephony Gateway. # Transfers the call in Telephony Gateway.
              "phoneNumber": "A String", # Required. The phone number to transfer the call to
                  # in [E.164 format](https://en.wikipedia.org/wiki/E.164).
                  #
                  # We currently only allow transferring to US numbers (+1xxxyyyzzzz).
            },
            "payload": { # Returns a response containing a custom, platform-specific payload.
                # See the Intent.Message.Platform type for a description of the
                # structure that may be required for your platform.
              "a_key": "", # Properties of the object.
            },
            "card": { # The card response message. # Displays a card.
              "buttons": [ # Optional. The collection of card buttons.
                { # Optional. Contains information about a button.
                  "text": "A String", # Optional. The text to show on the button.
                  "postback": "A String", # Optional. The text to send back to the Dialogflow API or a URI to
                      # open.
                },
              ],
              "title": "A String", # Optional. The title of the card.
              "subtitle": "A String", # Optional. The subtitle of the card.
              "imageUri": "A String", # Optional. The public URI to an image file for the card.
            },
          },
        ],
        "action": "A String", # The action name from the matched intent.
        "intent": { # Represents an intent. # The intent that matched the conversational query. Only some
            # fields are filled in this message, including but not limited to:
            # `name`, `display_name` and `webhook_state`.
            # Intents convert a number of user expressions or patterns into an action. An
            # action is an extraction of a user command or sentence semantics.
          "isFallback": True or False, # Optional. Indicates whether this is a fallback intent.
          "mlDisabled": True or False, # Optional. Indicates whether Machine Learning is disabled for the intent.
              # Note: If `ml_disabled` setting is set to true, then this intent is not
              # taken into account during inference in `ML ONLY` match mode. Also,
              # auto-markup in the UI is turned off.
          "displayName": "A String", # Required. The name of this intent.
          "name": "A String", # The unique identifier of this intent.
              # Required for Intents.UpdateIntent and Intents.BatchUpdateIntents
              # methods.
              # Format: `projects/<Project ID>/agent/intents/<Intent ID>`.
          "parameters": [ # Optional. The collection of parameters associated with the intent.
            { # Represents intent parameters.
              "mandatory": True or False, # Optional. Indicates whether the parameter is required. That is,
                  # whether the intent cannot be completed without collecting the parameter
                  # value.
              "name": "A String", # The unique identifier of this parameter.
              "defaultValue": "A String", # Optional. The default value to use when the `value` yields an empty
                  # result.
                  # Default values can be extracted from contexts by using the following
                  # syntax: `#context_name.parameter_name`.
              "entityTypeDisplayName": "A String", # Optional. The name of the entity type, prefixed with `@`, that
                  # describes values of the parameter. If the parameter is
                  # required, this must be provided.
              "value": "A String", # Optional. The definition of the parameter value. It can be:
                  # - a constant string,
                  # - a parameter value defined as `$parameter_name`,
                  # - an original parameter value defined as `$parameter_name.original`,
                  # - a parameter value from some context defined as
                  #   `#context_name.parameter_name`.
              "prompts": [ # Optional. The collection of prompts that the agent can present to the
                  # user in order to collect a value for the parameter.
                "A String",
              ],
              "isList": True or False, # Optional. Indicates whether the parameter represents a list of values.
              "displayName": "A String", # Required. The name of the parameter.
            },
          ],
          "trainingPhrases": [ # Optional. The collection of examples that the agent is
              # trained on.
            { # Represents an example that the agent is trained on.
              "parts": [ # Required. The ordered list of training phrase parts.
                  # The parts are concatenated in order to form the training phrase.
                  #
                  # Note: The API does not automatically annotate training phrases like the
                  # Dialogflow Console does.
                  #
                  # Note: Do not forget to include whitespace at part boundaries,
                  # so the training phrase is well formatted when the parts are concatenated.
                  #
                  # If the training phrase does not need to be annotated with parameters,
                  # you just need a single part with only the Part.text field set.
                  #
                  # If you want to annotate the training phrase, you must create multiple
                  # parts, where the fields of each part are populated in one of two ways:
                  #
                  # -   `Part.text` is set to a part of the phrase that has no parameters.
                  # -   `Part.text` is set to a part of the phrase that you want to annotate,
                  #     and the `entity_type`, `alias`, and `user_defined` fields are all
                  #     set.
                { # Represents a part of a training phrase.
                  "text": "A String", # Required. The text for this part.
                  "entityType": "A String", # Optional. The entity type name prefixed with `@`.
                      # This field is required for annotated parts of the training phrase.
                  "userDefined": True or False, # Optional. Indicates whether the text was manually annotated.
                      # This field is set to true when the Dialogflow Console is used to
                      # manually annotate the part. When creating an annotated part with the
                      # API, you must set this to true.
                  "alias": "A String", # Optional. The parameter name for the value extracted from the
                      # annotated part of the example.
                      # This field is required for annotated parts of the training phrase.
                },
              ],
              "type": "A String", # Required. The type of the training phrase.
              "name": "A String", # Output only. The unique identifier of this training phrase.
              "timesAddedCount": 42, # Optional. Indicates how many times this example was added to
                  # the intent. Each time a developer adds an existing sample by editing an
                  # intent or training, this counter is increased.
            },
          ],
          "followupIntentInfo": [ # Read-only. Information about all followup intents that have this intent as
              # a direct or indirect parent. We populate this field only in the output.
            { # Represents a single followup intent in the chain.
              "followupIntentName": "A String", # The unique identifier of the followup intent.
                  # Format: `projects/<Project ID>/agent/intents/<Intent ID>`.
              "parentFollowupIntentName": "A String", # The unique identifier of the followup intent's parent.
                  # Format: `projects/<Project ID>/agent/intents/<Intent ID>`.
            },
          ],
          "webhookState": "A String", # Optional. Indicates whether webhooks are enabled for the intent.
          "resetContexts": True or False, # Optional. Indicates whether to delete all contexts in the current
              # session when this intent is matched.
          "messages": [ # Optional. The collection of rich messages corresponding to the
              # `Response` field in the Dialogflow console.
            { # Corresponds to the `Response` field in the Dialogflow console.
              "simpleResponses": { # The collection of simple response candidates. # Returns a voice or text-only response for Actions on Google.
                  # This message in `QueryResult.fulfillment_messages` and
                  # `WebhookResponse.fulfillment_messages` should contain only one
                  # `SimpleResponse`.
                "simpleResponses": [ # Required. The list of simple responses.
                  { # The simple response message containing speech or text.
                    "textToSpeech": "A String", # One of text_to_speech or ssml must be provided. The plain text of the
                        # speech output. Mutually exclusive with ssml.
                    "displayText": "A String", # Optional. The text to display.
                    "ssml": "A String", # One of text_to_speech or ssml must be provided. Structured spoken
                        # response to the user in the SSML format. Mutually exclusive with
                        # text_to_speech.
                  },
                ],
              },
              "quickReplies": { # The quick replies response message. # Displays quick replies.
                "quickReplies": [ # Optional. The collection of quick replies.
                  "A String",
                ],
                "title": "A String", # Optional. The title of the collection of quick replies.
              },
              "platform": "A String", # Optional. The platform that this message is intended for.
              "text": { # The text response message. # Returns a text response.
                "text": [ # Optional. The collection of the agent's responses.
                  "A String",
                ],
              },
              "image": { # The image response message. # Displays an image.
                "accessibilityText": "A String", # A text description of the image to be used for accessibility,
                    # e.g., screen readers. Required if image_uri is set for CarouselSelect.
                "imageUri": "A String", # Optional. The public URI to an image file.
              },
              "telephonySynthesizeSpeech": { # Synthesizes speech and plays back the synthesized audio to the caller in # Synthesizes speech in Telephony Gateway.
                  # Telephony Gateway.
                  #
                  # Telephony Gateway takes the synthesizer settings from
                  # `DetectIntentResponse.output_audio_config` which can either be set
                  # at request-level or can come from the agent-level synthesizer config.
                "ssml": "A String", # The SSML to be synthesized. For more information, see
                    # [SSML](https://developers.google.com/actions/reference/ssml).
                "text": "A String", # The raw text to be synthesized.
              },
              "suggestions": { # The collection of suggestions. # Displays suggestion chips for Actions on Google.
                "suggestions": [ # Required. The list of suggested replies.
                  { # The suggestion chip message that the user can tap to quickly post a reply
                      # to the conversation.
                    "title": "A String", # Required. The text shown in the suggestion chip.
1421                  },
1422                ],
1423              },
              "telephonyPlayAudio": { # Plays audio from a file in Telephony Gateway. # Plays audio from a file in Telephony Gateway.
                "audioUri": "A String", # Required. URI to a Google Cloud Storage object containing the audio to
                    # play, e.g., "gs://bucket/object". The object must contain a single
                    # channel (mono) of linear PCM audio (2 bytes / sample) at 8kHz.
                    #
                    # This object must be readable by the `service-<Project
                    # Number>@gcp-sa-dialogflow.iam.gserviceaccount.com` service account
                    # where <Project Number> is the number of the Telephony Gateway project
                    # (usually the same as the Dialogflow agent project). If the Google Cloud
                    # Storage bucket is in the Telephony Gateway project, this permission is
                    # added by default when enabling the Dialogflow V2 API.
                    #
                    # For audio from other sources, consider using the
                    # `TelephonySynthesizeSpeech` message with SSML.
              },
              "linkOutSuggestion": { # The suggestion chip message that allows the user to jump out to the app # Displays a link out suggestion chip for Actions on Google.
                  # or website associated with this agent.
                "uri": "A String", # Required. The URI of the app or site to open when the user taps the
                    # suggestion chip.
                "destinationName": "A String", # Required. The name of the app or site this chip is linking to.
              },
              "basicCard": { # The basic card message. Useful for displaying information. # Displays a basic card for Actions on Google.
                "buttons": [ # Optional. The collection of card buttons.
                  { # The button object that appears at the bottom of a card.
                    "openUriAction": { # Opens the given URI. # Required. Action to take when a user taps on the button.
                      "uri": "A String", # Required. The HTTP or HTTPS scheme URI.
                    },
                    "title": "A String", # Required. The title of the button.
                  },
                ],
                "formattedText": "A String", # Required, unless image is present. The body text of the card.
                "image": { # The image response message. # Optional. The image for the card.
                  "accessibilityText": "A String", # A text description of the image to be used for accessibility,
                      # e.g., screen readers. Required if image_uri is set for CarouselSelect.
                  "imageUri": "A String", # Optional. The public URI to an image file.
                },
                "subtitle": "A String", # Optional. The subtitle of the card.
                "title": "A String", # Optional. The title of the card.
              },
              "carouselSelect": { # The card for presenting a carousel of options to select from. # Displays a carousel card for Actions on Google.
                "items": [ # Required. Carousel items.
                  { # An item in the carousel.
                    "info": { # Additional info about the select item for when it is triggered in a # Required. Additional info about the option item.
                        # dialog.
                      "synonyms": [ # Optional. A list of synonyms that can also be used to trigger this
                          # item in dialog.
                        "A String",
                      ],
                      "key": "A String", # Required. A unique key that will be sent back to the agent if this
                          # response is given.
                    },
                    "image": { # The image response message. # Optional. The image to display.
                      "accessibilityText": "A String", # A text description of the image to be used for accessibility,
                          # e.g., screen readers. Required if image_uri is set for CarouselSelect.
                      "imageUri": "A String", # Optional. The public URI to an image file.
                    },
                    "description": "A String", # Optional. The body text of the card.
                    "title": "A String", # Required. Title of the carousel item.
                  },
                ],
              },
              "listSelect": { # The card for presenting a list of options to select from. # Displays a list card for Actions on Google.
                "items": [ # Required. List items.
                  { # An item in the list.
                    "info": { # Additional info about the select item for when it is triggered in a # Required. Additional information about this option.
                        # dialog.
                      "synonyms": [ # Optional. A list of synonyms that can also be used to trigger this
                          # item in dialog.
                        "A String",
                      ],
                      "key": "A String", # Required. A unique key that will be sent back to the agent if this
                          # response is given.
                    },
                    "image": { # The image response message. # Optional. The image to display.
                      "accessibilityText": "A String", # A text description of the image to be used for accessibility,
                          # e.g., screen readers. Required if image_uri is set for CarouselSelect.
                      "imageUri": "A String", # Optional. The public URI to an image file.
                    },
                    "description": "A String", # Optional. The main text describing the item.
                    "title": "A String", # Required. The title of the list item.
                  },
                ],
                "title": "A String", # Optional. The overall title of the list.
              },
              "telephonyTransferCall": { # Transfers the call in Telephony Gateway. # Transfers the call in Telephony Gateway.
                "phoneNumber": "A String", # Required. The phone number to transfer the call to
                    # in [E.164 format](https://en.wikipedia.org/wiki/E.164).
                    #
                    # We currently only allow transferring to US numbers (+1xxxyyyzzzz).
              },
              "payload": { # Returns a response containing a custom, platform-specific payload.
                  # See the Intent.Message.Platform type for a description of the
                  # structure that may be required for your platform.
                "a_key": "", # Properties of the object.
              },
              "card": { # The card response message. # Displays a card.
                "buttons": [ # Optional. The collection of card buttons.
                  { # Optional. Contains information about a button.
                    "text": "A String", # Optional. The text to show on the button.
                    "postback": "A String", # Optional. The text to send back to the Dialogflow API or a URI to
                        # open.
                  },
                ],
                "title": "A String", # Optional. The title of the card.
                "subtitle": "A String", # Optional. The subtitle of the card.
                "imageUri": "A String", # Optional. The public URI to an image file for the card.
              },
            },
          ],
          "parentFollowupIntentName": "A String", # Read-only after creation. The unique identifier of the parent intent in the
              # chain of followup intents. You can set this field when creating an intent,
              # for example with CreateIntent or BatchUpdateIntents, in order to
              # make this intent a followup intent.
              #
              # It identifies the parent followup intent.
              # Format: `projects/<Project ID>/agent/intents/<Intent ID>`.
          "defaultResponsePlatforms": [ # Optional. The list of platforms for which the first responses will be
              # copied from the messages in PLATFORM_UNSPECIFIED (i.e. default platform).
            "A String",
          ],
          "priority": 42, # Optional. The priority of this intent. Higher numbers represent higher
              # priorities. If this is zero or unspecified, we use the default
              # priority 500000.
              #
              # Negative numbers mean that the intent is disabled.
          "rootFollowupIntentName": "A String", # Read-only. The unique identifier of the root intent in the chain of
              # followup intents. It identifies the correct followup intents chain for
              # this intent. We populate this field only in the output.
              #
              # Format: `projects/<Project ID>/agent/intents/<Intent ID>`.
          "endInteraction": True or False, # Optional. Indicates that this intent ends an interaction. Some integrations
              # (e.g., Actions on Google or Dialogflow phone gateway) use this information
              # to close interaction with an end user. Default is false.
          "inputContextNames": [ # Optional. The list of context names required for this intent to be
              # triggered.
              # Format: `projects/<Project ID>/agent/sessions/-/contexts/<Context ID>`.
            "A String",
          ],
          "mlEnabled": True or False, # Optional. Indicates whether Machine Learning is enabled for the intent.
              # Note: If `ml_enabled` setting is set to false, then this intent is not
              # taken into account during inference in `ML ONLY` match mode. Also,
              # auto-markup in the UI is turned off.
              # DEPRECATED! Please use `ml_disabled` field instead.
              # NOTE: If both `ml_enabled` and `ml_disabled` are either not set or false,
              # then the default value is determined as follows:
              # - Before April 15th, 2018 the default is:
              #   ml_enabled = false / ml_disabled = true.
              # - After April 15th, 2018 the default is:
              #   ml_enabled = true / ml_disabled = false.
          "action": "A String", # Optional. The name of the action associated with the intent.
              # Note: The action name must not contain whitespace.
          "outputContexts": [ # Optional. The collection of contexts that are activated when the intent
              # is matched. Context messages in this collection should not set the
              # parameters field. Setting the `lifespan_count` to 0 will reset the context
              # when the intent is matched.
              # Format: `projects/<Project ID>/agent/sessions/-/contexts/<Context ID>`.
            { # Represents a context.
              "parameters": { # Optional. The collection of parameters associated with this context.
                  # Refer to [this
                  # doc](https://cloud.google.com/dialogflow-enterprise/docs/intents-actions-parameters)
                  # for syntax.
                "a_key": "", # Properties of the object.
              },
              "name": "A String", # Required. The unique identifier of the context. Format:
                  # `projects/<Project ID>/agent/sessions/<Session ID>/contexts/<Context ID>`,
                  # or `projects/<Project ID>/agent/environments/<Environment ID>/users/<User
                  # ID>/sessions/<Session ID>/contexts/<Context ID>`.
                  #
                  # The `Context ID` is always converted to lowercase, may only contain
                  # characters in a-zA-Z0-9_-% and may be at most 250 bytes long.
                  #
                  # If `Environment ID` is not specified, we assume default 'draft'
                  # environment. If `User ID` is not specified, we assume default '-' user.
              "lifespanCount": 42, # Optional. The number of conversational query requests after which the
                  # context expires. If set to `0` (the default) the context expires
                  # immediately. Contexts expire automatically after 20 minutes if there
                  # are no matching queries.
            },
          ],
          "events": [ # Optional. The collection of event names that trigger the intent.
              # If the collection of input contexts is not empty, all of the contexts must
              # be present in the active user session for an event to trigger this intent.
            "A String",
          ],
        },
        "webhookPayload": { # If the query was fulfilled by a webhook call, this field is set to the
            # value of the `payload` field returned in the webhook response.
          "a_key": "", # Properties of the object.
        },
        "diagnosticInfo": { # The free-form diagnostic info. For example, this field could contain
            # webhook call latency. The string keys of the Struct's fields map can change
            # without notice.
          "a_key": "", # Properties of the object.
        },
        "queryText": "A String", # The original conversational query text:
            #
            # - If natural language text was provided as input, `query_text` contains
            #   a copy of the input.
            # - If natural language speech audio was provided as input, `query_text`
            #   contains the speech recognition result. If the speech recognizer produced
            #   multiple alternatives, a particular one is picked.
            # - If automatic spell correction is enabled, `query_text` will contain the
            #   corrected user input.
        "outputContexts": [ # The collection of output contexts. If applicable,
            # `output_contexts.parameters` contains entries with name
            # `<parameter name>.original` containing the original parameter values
            # before the query.
          { # Represents a context.
            "parameters": { # Optional. The collection of parameters associated with this context.
                # Refer to [this
                # doc](https://cloud.google.com/dialogflow-enterprise/docs/intents-actions-parameters)
                # for syntax.
              "a_key": "", # Properties of the object.
            },
            "name": "A String", # Required. The unique identifier of the context. Format:
                # `projects/<Project ID>/agent/sessions/<Session ID>/contexts/<Context ID>`,
                # or `projects/<Project ID>/agent/environments/<Environment ID>/users/<User
                # ID>/sessions/<Session ID>/contexts/<Context ID>`.
                #
                # The `Context ID` is always converted to lowercase, may only contain
                # characters in a-zA-Z0-9_-% and may be at most 250 bytes long.
                #
                # If `Environment ID` is not specified, we assume default 'draft'
                # environment. If `User ID` is not specified, we assume default '-' user.
            "lifespanCount": 42, # Optional. The number of conversational query requests after which the
                # context expires. If set to `0` (the default) the context expires
                # immediately. Contexts expire automatically after 20 minutes if there
                # are no matching queries.
          },
        ],
        "webhookSource": "A String", # If the query was fulfilled by a webhook call, this field is set to the
            # value of the `source` field returned in the webhook response.
        "allRequiredParamsPresent": True or False, # This field is set to:
            #
            # - `false` if the matched intent has required parameters and not all of
            #    the required parameter values have been collected.
            # - `true` if all required parameter values have been collected, or if the
            #    matched intent doesn't contain any required parameters.
      },
    ],
    "webhookStatus": { # The `Status` type defines a logical error model that is suitable for # Specifies the status of the webhook request.
        # different programming environments, including REST APIs and RPC APIs. It is
        # used by [gRPC](https://github.com/grpc). Each `Status` message contains
        # three pieces of data: error code, error message, and error details.
        #
        # You can find out more about this error model and how to work with it in the
        # [API Design Guide](https://cloud.google.com/apis/design/errors).
      "message": "A String", # A developer-facing error message, which should be in English. Any
          # user-facing error message should be localized and sent in the
          # google.rpc.Status.details field, or localized by the client.
      "code": 42, # The status code, which should be an enum value of google.rpc.Code.
      "details": [ # A list of messages that carry the error details. There is a common set of
          # message types for APIs to use.
        {
          "a_key": "", # Properties of the object. Contains field @type with type URL.
        },
      ],
    },
  }</pre>
</div>
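As an illustrative sketch only (not part of the generated reference), a client might read a few of the documented `queryResult` fields out of a detect-intent style response. The helper below is hypothetical; it assumes a plain Python dict shaped like the schema above and uses the documented rule that a context with `lifespanCount` of 0 expires immediately:

```python
# Hypothetical helper: summarize fields from a DetectIntent-style
# queryResult dict shaped like the schema documented above. Field names
# (queryText, allRequiredParamsPresent, outputContexts, lifespanCount)
# follow the reference; the helper itself is illustrative.

def summarize_query_result(query_result):
    """Return a small summary dict for one queryResult entry."""
    active_contexts = [
        c.get("name", "")
        for c in query_result.get("outputContexts", [])
        # Per the reference, a lifespanCount of 0 means the context
        # expires immediately, so only keep contexts that remain active.
        if c.get("lifespanCount", 0) > 0
    ]
    return {
        "query_text": query_result.get("queryText", ""),
        "complete": query_result.get("allRequiredParamsPresent", False),
        "active_contexts": active_contexts,
    }


if __name__ == "__main__":
    sample = {
        "queryText": "book a table for two",
        "allRequiredParamsPresent": False,
        "outputContexts": [
            {"name": "projects/p/agent/sessions/s/contexts/booking",
             "lifespanCount": 5},
            {"name": "projects/p/agent/sessions/s/contexts/stale",
             "lifespanCount": 0},
        ],
    }
    print(summarize_query_result(sample))
```

In practice the response dict would come from the generated client's `detectIntent` call on this sessions resource; the summarization step is independent of how the response was obtained.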

</body></html>