Method: projects.sendInteraction

Plays one round of the conversation.

HTTP request

POST https://actions.googleapis.com/v2/{project=projects/*}:sendInteraction

The URL uses gRPC Transcoding syntax.

Path parameters

Parameters
project

string

Required. The project being tested, indicated by the Project ID. Format: projects/{project}

Request body

The request body contains data with the following structure:

JSON representation
{
  "input": {
    object (UserInput)
  },
  "deviceProperties": {
    object (DeviceProperties)
  },
  "conversationToken": string
}
Fields
input

object (UserInput)

Required. Input provided by the user.

deviceProperties

object (DeviceProperties)

Required. Properties of the device used for interacting with the Action.

conversationToken

string

Opaque token that must be passed as received from SendInteractionResponse on the previous interaction. This can be left unset in order to start a new conversation, either as the first interaction of a testing session or to abandon a previous conversation and start a new one.
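For illustration, a minimal sketch of assembling a request body matching the schema above. The helper name and the use of Python are assumptions for this example, not part of the API:

```python
def build_send_interaction_body(query, surface="PHONE", locale="en-US",
                                time_zone="America/New_York",
                                conversation_token=None):
    """Assemble a projects.sendInteraction request body (hypothetical helper).

    Leaving conversation_token as None omits the field, which starts a
    new conversation per the description above.
    """
    body = {
        "input": {"query": query, "type": "KEYBOARD"},
        "deviceProperties": {
            "surface": surface,
            "locale": locale,
            "timeZone": time_zone,
        },
    }
    if conversation_token is not None:
        body["conversationToken"] = conversation_token
    return body
```

The resulting body would then be POSTed to `https://actions.googleapis.com/v2/projects/{project}:sendInteraction` with an OAuth 2.0 bearer token.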

Response body

If successful, the response body contains data with the following structure:

Response to a round of the conversation.

JSON representation
{
  "output": {
    object (Output)
  },
  "diagnostics": {
    object (Diagnostics)
  },
  "conversationToken": string
}
Fields
output

object (Output)

Output provided to the user.

diagnostics

object (Diagnostics)

Diagnostics information that explains how the request was handled.

conversationToken

string

Opaque token to be set on SendInteractionRequest on the next RPC call in order to continue the same conversation.
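The token round-trip can be sketched as a loop; `send` here is a stand-in for the actual HTTP call, not a real client method:

```python
def run_conversation(send, queries):
    """Drive several rounds, echoing conversationToken back each turn.

    `send` is any callable taking (query, conversation_token) and
    returning a dict shaped like SendInteractionResponse.
    """
    token = None  # unset on the first round -> new conversation
    outputs = []
    for query in queries:
        response = send(query, token)
        token = response.get("conversationToken")  # pass back next round
        outputs.append(response.get("output", {}))
    return outputs
```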

UserInput

User input provided on a conversation round.

JSON representation
{
  "query": string,
  "type": enum (InputType)
}
Fields
query

string

Content of the input sent by the user.

type

enum (InputType)

Type of the input.

InputType

Indicates the input source, typed query or voice query.

Enums
INPUT_TYPE_UNSPECIFIED Unspecified input source.
TOUCH Query from a GUI interaction.
VOICE Voice query.
KEYBOARD Typed query.
URL The action was triggered by a URL link.

DeviceProperties

Properties of the device relevant to a conversation round.

JSON representation
{
  "surface": enum (Surface),
  "location": {
    object (Location)
  },
  "locale": string,
  "timeZone": string
}
Fields
surface

enum (Surface)

Surface used for interacting with the Action.

location

object (Location)

Device location such as latitude, longitude, and formatted address.

locale

string

Locale as set on the device. The format should follow BCP 47: https://tools.ietf.org/html/bcp47 Examples: en, en-US, es-419 (more examples at https://tools.ietf.org/html/bcp47#appendix-A).

timeZone

string

Time zone as set on the device. The format should follow the IANA Time Zone Database, e.g. "America/New_York": https://www.iana.org/time-zones
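A sketch of building a DeviceProperties object with basic shape checks on the locale and time zone. The helper and the regexes are illustrative assumptions; full BCP 47 or IANA validation needs a dedicated library:

```python
import re

# Loose shape checks only; not full BCP 47 / IANA validation.
_LOCALE_RE = re.compile(r"^[a-z]{2,3}(-[A-Za-z0-9]{2,8})*$")
_TZ_RE = re.compile(r"^[A-Za-z_]+(/[A-Za-z0-9_+\-]+)+$")

def build_device_properties(surface, locale, time_zone):
    """Assemble a DeviceProperties dict (hypothetical helper)."""
    if not _LOCALE_RE.match(locale):
        raise ValueError(f"locale {locale!r} does not look like a BCP 47 tag")
    if not _TZ_RE.match(time_zone):
        raise ValueError(f"timeZone {time_zone!r} does not look like an IANA zone name")
    return {"surface": surface, "locale": locale, "timeZone": time_zone}
```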

Surface

Possible surfaces used to interact with the Action. Additional values may be included in the future.

Enums
SURFACE_UNSPECIFIED Default value. This value is unused.
SPEAKER Speaker (e.g. Google Home).
PHONE Phone.
ALLO Allo Chat.
SMART_DISPLAY Smart Display Device.
KAI_OS KaiOS.

Location

Container that represents a location.

JSON representation
{
  "coordinates": {
    object (LatLng)
  },
  "formattedAddress": string,
  "zipCode": string,
  "city": string
}
Fields
coordinates

object (LatLng)

Geo coordinates. Requires the DEVICE_PRECISE_LOCATION permission (google.actions.v2.Permission.DEVICE_PRECISE_LOCATION).

formattedAddress

string

Display address, e.g., "1600 Amphitheatre Pkwy, Mountain View, CA 94043". Requires the DEVICE_PRECISE_LOCATION permission (google.actions.v2.Permission.DEVICE_PRECISE_LOCATION).

zipCode

string

Zip code. Requires the DEVICE_PRECISE_LOCATION or DEVICE_COARSE_LOCATION permission (see google.actions.v2.Permission).

city

string

City. Requires the DEVICE_PRECISE_LOCATION or DEVICE_COARSE_LOCATION permission (see google.actions.v2.Permission).

LatLng

An object that represents a latitude/longitude pair. This is expressed as a pair of doubles to represent degrees latitude and degrees longitude. Unless specified otherwise, this object must conform to the WGS84 standard. Values must be within normalized ranges.

JSON representation
{
  "latitude": number,
  "longitude": number
}
Fields
latitude

number

The latitude in degrees. It must be in the range [-90.0, +90.0].

longitude

number

The longitude in degrees. It must be in the range [-180.0, +180.0].
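The normalized ranges above can be enforced with a small constructor; the helper name is an assumption for illustration:

```python
def make_lat_lng(latitude, longitude):
    """Build a LatLng dict, enforcing the documented normalized ranges."""
    if not -90.0 <= latitude <= 90.0:
        raise ValueError("latitude must be in [-90.0, +90.0]")
    if not -180.0 <= longitude <= 180.0:
        raise ValueError("longitude must be in [-180.0, +180.0]")
    return {"latitude": latitude, "longitude": longitude}
```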

Output

User-visible output to the conversation round.

JSON representation
{
  "text": string,
  "speech": [
    string
  ],
  "canvas": {
    object (Canvas)
  },
  "actionsBuilderPrompt": {
    object (Prompt)
  }
}
Fields
text

string

Spoken response sent to user as a plain string.

speech[]

string

Speech content produced by the Action. This may include markup elements such as SSML.

canvas

object (Canvas)

Interactive Canvas content.

actionsBuilderPrompt

object (Prompt)

State of the prompt at the end of the conversation round. More information about the prompt: https://developers.google.com/assistant/conversational/prompts

Canvas

Represents an Interactive Canvas response to be sent to the user. This can be used in conjunction with the "firstSimple" field in the containing prompt to speak to the user in addition to displaying an Interactive Canvas response. The maximum size of the response is 50K bytes.

JSON representation
{
  "url": string,
  "data": [
    value
  ],
  "suppressMic": boolean,
  "enableFullScreen": boolean
}
Fields
url

string

URL of the interactive canvas web app to load. If not set, the URL from the currently active canvas will be reused.

data[]

value (Value format)

Optional. JSON data to be passed through to the immersive experience web page as an event. If the "override" field in the containing prompt is "false", data values defined in this Canvas prompt will be added after data values defined in previous Canvas prompts.

suppressMic

boolean

Optional. Default value: false.

enableFullScreen

boolean

If true, the canvas application occupies the full screen and won't have a header at the top. A toast message will also be displayed on the loading screen that includes the Action's display name, the developer's name, and instructions for exiting the Action. Default value: false.

Prompt

Represent a response to a user.

JSON representation
{
  "append": boolean,
  "override": boolean,
  "firstSimple": {
    object (Simple)
  },
  "content": {
    object (Content)
  },
  "lastSimple": {
    object (Simple)
  },
  "suggestions": [
    {
      object (Suggestion)
    }
  ],
  "link": {
    object (Link)
  },
  "canvas": {
    object (Canvas)
  }
}
Fields
append
(deprecated)

boolean

Optional. Mode for how these messages should be merged with previously defined messages. "false" will clear all previously defined messages (first and last simple, content, suggestions, link, and canvas) and add messages defined in this prompt. "true" will add messages defined in this prompt to messages defined in previous responses. Setting this field to "true" also enables appending to some fields inside Simple prompts, the Suggestions prompt, and the Canvas prompt (part of the Content prompt). The Content and Link messages are always overwritten if defined in the prompt. Default value is "false".

override

boolean

Optional. Mode for how these messages should be merged with previously defined messages. "true" clears all previously defined messages (first and last simple, content, suggestions, link, and canvas) and adds messages defined in this prompt. "false" adds messages defined in this prompt to messages defined in previous responses. Leaving this field set to "false" also enables appending to some fields inside Simple prompts, the Suggestions prompt, and the Canvas prompt (part of the Content prompt). The Content and Link messages are always overwritten if defined in the prompt. Default value is "false".

firstSimple

object (Simple)

Optional. The first voice and text-only response.

content

object (Content)

Optional. Content such as a card, list, or media to display to the user.

lastSimple

object (Simple)

Optional. The last voice and text-only response.

suggestions[]

object (Suggestion)

Optional. Suggestions to be displayed to the user which will always appear at the end of the response. If the "override" field in the containing prompt is "false", the titles defined in this field will be added to titles defined in any previously defined suggestions prompts and duplicate values will be removed.

canvas

object (Canvas)

Optional. Represents an Interactive Canvas response to be sent to the user.
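As an illustration of the "override" merge rule for suggestions described above (append new titles, then drop duplicates), a sketch follows. It models only the suggestion chips, not the engine's full prompt merge, and the helper is hypothetical:

```python
def merge_suggestions(previous, new, override=False):
    """Merge suggestion-chip lists per the documented 'override' semantics.

    override=True: the new suggestions replace the old ones.
    override=False: new titles are appended and duplicates removed.
    """
    if override:
        return list(new)
    merged, seen = [], set()
    for chip in list(previous) + list(new):
        title = chip["title"]
        if title not in seen:
            seen.add(title)
            merged.append(chip)
    return merged
```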

Simple

Represents a simple prompt to be sent to a user.

JSON representation
{
  "speech": string,
  "text": string
}
Fields
speech

string

Optional. Represents the speech to be spoken to the user. Can be SSML or text to speech. If the "override" field in the containing prompt is "true", the speech defined in this field replaces the previous Simple prompt's speech.

text

string

Optional text to display in the chat bubble. If not given, a display rendering of the speech field above will be used. Limited to 640 characters. If the "override" field in the containing prompt is "true", the text defined in this field replaces the previous Simple prompt's text.

Content

Content to be shown.

JSON representation
{

  // Union field content can be only one of the following:
  "card": {
    object (Card)
  },
  "image": {
    object (Image)
  },
  "table": {
    object (Table)
  },
  "media": {
    object (Media)
  },
  "canvas": {
    object (Canvas)
  },
  "collection": {
    object (Collection)
  },
  "list": {
    object (List)
  }
  // End of list of possible types for union field content.
}
Fields
Union field content. content can be only one of the following:
card

object (Card)

A basic card.

image

object (Image)

An image.

table

object (Table)

Table card.

media

object (Media)

Response indicating a set of media to be played.

canvas
(deprecated)

object (Canvas)

A response to be used for interactive canvas experience.

collection

object (Collection)

A card presenting a collection of options to select from.

list

object (List)

A card presenting a list of options to select from.

Card

A basic card for displaying some information, e.g. an image and/or text.

JSON representation
{
  "title": string,
  "subtitle": string,
  "text": string,
  "image": {
    object (Image)
  },
  "imageFill": enum (ImageFill),
  "button": {
    object (Link)
  }
}
Fields
title

string

Overall title of the card. Optional.

subtitle

string

Optional.

text

string

Body text of the card. Supports a limited set of markdown syntax for formatting. Required, unless image is present.

image

object (Image)

A hero image for the card. The height is fixed to 192dp. Optional.

imageFill

enum (ImageFill)

How the image background will be filled. Optional.

button

object (Link)

Button. Optional.

Image

An image displayed in the card.

JSON representation
{
  "url": string,
  "alt": string,
  "height": integer,
  "width": integer
}
Fields
url

string

The source URL of the image. Images can be JPG, PNG, and GIF (animated and non-animated). For example, https://www.agentx.com/logo.png. Required.

alt

string

A text description of the image to be used for accessibility, e.g. screen readers. Required.

height

integer

The height of the image in pixels. Optional.

width

integer

The width of the image in pixels. Optional.

ImageFill

Possible image display options for affecting the presentation of the image. This should be used when the image's aspect ratio does not match the image container's aspect ratio.

Enums
UNSPECIFIED Unspecified image fill.
GRAY Fill the gaps between the image and the image container with gray bars.
WHITE Fill the gaps between the image and the image container with white bars.
CROPPED Image is scaled such that the image width and height match or exceed the container dimensions. This may crop the top and bottom of the image if the scaled image height is greater than the container height, or crop the left and right of the image if the scaled image width is greater than the container width. This is similar to "Zoom Mode" on a widescreen TV when playing a 4:3 video.

OpenUrl

Action taken when a user opens a link.

JSON representation
{
  "url": string,
  "hint": enum (UrlHint)
}
Fields
url

string

The url field, which can be any of: http/https URLs for opening an App-linked App or a webpage.

hint

enum (UrlHint)

Indicates a hint for the url type.

UrlHint

Different types of URL hints.

Enums
AMP URL that points directly to AMP content, or to a canonical URL that refers to AMP content via a <link rel="amphtml"> tag.

Table

A table card for displaying a table of text.

JSON representation
{
  "title": string,
  "subtitle": string,
  "image": {
    object (Image)
  },
  "columns": [
    {
      object (TableColumn)
    }
  ],
  "rows": [
    {
      object (TableRow)
    }
  ],
  "button": {
    object (Link)
  }
}
Fields
title

string

Overall title of the table. Optional but must be set if subtitle is set.

subtitle

string

Subtitle for the table. Optional.

image

object (Image)

Image associated with the table. Optional.

columns[]

object (TableColumn)

Headers and alignment of columns.

rows[]

object (TableRow)

Row data of the table. The first 3 rows are guaranteed to be shown but others might be cut on certain surfaces. Please test with the simulator to see which rows will be shown for a given surface. On surfaces that support the WEB_BROWSER capability, you can point the user to a web page with more data.

button

object (Link)

Button.
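The column/row/cell nesting above can be assembled with a small builder; the helper name and the rectangular-shape check are illustrative assumptions:

```python
def build_table(title, headers, rows):
    """Assemble a Table card body; each row is a list of cell strings."""
    width = len(headers)
    for row in rows:
        if len(row) != width:
            raise ValueError("every row needs one cell per column")
    return {
        "title": title,
        "columns": [{"header": h} for h in headers],
        "rows": [{"cells": [{"text": c} for c in row]} for row in rows],
    }
```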

TableColumn

Describes a column in a table.

JSON representation
{
  "header": string,
  "align": enum (HorizontalAlignment)
}
Fields
header

string

Header text for the column.

align

enum (HorizontalAlignment)

Horizontal alignment of content with respect to the column. If unspecified, content will be aligned to the leading edge.

HorizontalAlignment

The alignment of the content within the cell.

Enums
UNSPECIFIED Unspecified horizontal alignment.
LEADING Leading edge of the cell. This is the default.
CENTER Content is aligned to the center of the column.
TRAILING Content is aligned to the trailing edge of the column.

TableRow

Describes a row in the table.

JSON representation
{
  "cells": [
    {
      object (TableCell)
    }
  ],
  "divider": boolean
}
Fields
cells[]

object (TableCell)

Cells in this row. The first 3 cells are guaranteed to be shown but others might be cut on certain surfaces. Please test with the simulator to see which cells will be shown for a given surface.

divider

boolean

Indicates whether there should be a divider after each row.

TableCell

Describes a cell in a row.

JSON representation
{
  "text": string
}
Fields
text

string

Text content of the cell.

Media

Represents one media object. Contains information about the media, such as name, description, url, etc.

JSON representation
{
  "mediaType": enum (MediaType),
  "startOffset": string,
  "optionalMediaControls": [
    enum (OptionalMediaControls)
  ],
  "mediaObjects": [
    {
      object (MediaObject)
    }
  ]
}
Fields
mediaType

enum (MediaType)

Media type.

startOffset

string (Duration format)

Start offset of the first media object.

A duration in seconds with up to nine fractional digits, terminated by 's'. Example: "3.5s".

optionalMediaControls[]

enum (OptionalMediaControls)

Optional media control types this media response session can support. If set, a request will be made to the third party when a certain media event happens. If not set, the third party must still handle the two default control types, FINISHED and FAILED.

mediaObjects[]

object (MediaObject)

List of media objects.
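The Duration JSON format used by startOffset ("a duration in seconds with up to nine fractional digits, terminated by 's'") can be produced with a short formatter; the helper name is an assumption:

```python
def to_duration(seconds):
    """Format seconds as a protobuf Duration JSON string, e.g. 3.5 -> '3.5s'."""
    text = f"{seconds:.9f}".rstrip("0").rstrip(".")
    return text + "s"
```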

MediaType

Media type of this response.

Enums
MEDIA_TYPE_UNSPECIFIED Unspecified media type.
AUDIO Audio file.
MEDIA_STATUS_ACK Response to acknowledge a media status report.

OptionalMediaControls

Optional media control types the media response can support.

Enums
OPTIONAL_MEDIA_CONTROLS_UNSPECIFIED Unspecified value
PAUSED Paused event. Triggered when user pauses the media.
STOPPED Stopped event. Triggered when user exits out of 3p session during media play.

MediaObject

Represents a single media object.

JSON representation
{
  "name": string,
  "description": string,
  "url": string,
  "image": {
    object (MediaImage)
  }
}
Fields
name

string

Name of this media object.

description

string

Description of this media object.

url

string

The url pointing to the media content.

image

object (MediaImage)

Image to show with the media card.

MediaImage

Image to show with the media card.

JSON representation
{

  // Union field image can be only one of the following:
  "large": {
    object (Image)
  },
  "icon": {
    object (Image)
  }
  // End of list of possible types for union field image.
}
Fields
Union field image. image can be only one of the following:
large

object (Image)

A large image, such as the cover of the album, etc.

icon

object (Image)

A small image icon displayed to the right of the title. It's resized to 36x36 dp.

Collection

A card for presenting a collection of options to select from.

JSON representation
{
  "title": string,
  "subtitle": string,
  "items": [
    {
      object (CollectionItem)
    }
  ],
  "imageFill": enum (ImageFill)
}
Fields
title

string

Title of the collection. Optional.

subtitle

string

Subtitle of the collection. Optional.

items[]

object (CollectionItem)

The collection items to display. min: 2, max: 10.

imageFill

enum (ImageFill)

How the image backgrounds of collection items will be filled. Optional.

CollectionItem

An item in the collection.

JSON representation
{
  "key": string
}
Fields
key

string

Required. The NLU key that matches the entry key name in the associated Type.

List

A card for presenting a list of options to select from.

JSON representation
{
  "title": string,
  "subtitle": string,
  "items": [
    {
      object (ListItem)
    }
  ]
}
Fields
title

string

Title of the list. Optional.

subtitle

string

Subtitle of the list. Optional.

items[]

object (ListItem)

The list items to display. min: 2, max: 30.

ListItem

An item in the list.

JSON representation
{
  "key": string
}
Fields
key

string

Required. The NLU key that matches the entry key name in the associated Type.

Suggestion

Input suggestion to be presented to the user.

JSON representation
{
  "title": string
}
Fields
title

string

Required. The text shown in the suggestion chip. When tapped, this text will be posted back to the conversation verbatim as if the user had typed it. Each title must be unique among the set of suggestion chips. Max 25 characters.
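The two constraints above (unique titles, 25-character limit) can be checked while building the chips; the helper is hypothetical:

```python
def make_suggestions(titles):
    """Build suggestion chips, enforcing the documented constraints."""
    if len(set(titles)) != len(titles):
        raise ValueError("suggestion titles must be unique")
    for title in titles:
        if len(title) > 25:
            raise ValueError(f"title {title!r} exceeds 25 characters")
    return [{"title": t} for t in titles]
```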

Diagnostics

Diagnostics information related to the conversation round.

JSON representation
{
  "actionsBuilderEvents": [
    {
      object (ExecutionEvent)
    }
  ]
}
Fields
actionsBuilderEvents[]

object (ExecutionEvent)

List of events with details about processing of the conversation round throughout the stages of the Actions Builder interaction model. Populated for Actions Builder & Actions SDK apps only.

ExecutionEvent

Contains information about an execution event that happened during processing of an Actions Builder conversation request. For an overview of the stages involved in a conversation request, see https://developers.google.com/assistant/conversational/actions.

JSON representation
{
  "eventTime": string,
  "executionState": {
    object (ExecutionState)
  },
  "status": {
    object (Status)
  },
  "warningMessages": [
    string
  ],

  // Union field EventData can be only one of the following:
  "userInput": {
    object (UserConversationInput)
  },
  "intentMatch": {
    object (IntentMatch)
  },
  "conditionsEvaluated": {
    object (ConditionsEvaluated)
  },
  "onSceneEnter": {
    object (OnSceneEnter)
  },
  "webhookRequest": {
    object (WebhookRequest)
  },
  "webhookResponse": {
    object (WebhookResponse)
  },
  "webhookInitiatedTransition": {
    object (WebhookInitiatedTransition)
  },
  "slotMatch": {
    object (SlotMatch)
  },
  "slotRequested": {
    object (SlotRequested)
  },
  "slotValidated": {
    object (SlotValidated)
  },
  "formFilled": {
    object (FormFilled)
  },
  "waitingUserInput": {
    object (WaitingForUserInput)
  },
  "endConversation": {
    object (EndConversation)
  }
  // End of list of possible types for union field EventData.
}
Fields
eventTime

string (Timestamp format)

Timestamp when the event happened.

A timestamp in RFC3339 UTC "Zulu" format, with nanosecond resolution and up to nine fractional digits. Examples: "2014-10-02T15:01:23Z" and "2014-10-02T15:01:23.045123456Z".

executionState

object (ExecutionState)

State of the execution during this event.

status

object (Status)

Resulting status of particular execution step.

warningMessages[]

string

List of warnings generated during execution of this event. Warnings are tips for the developer discovered during the conversation request. These are usually non-critical and do not halt the execution of the request. For example, a warning might be generated when the webhook tries to override a custom type which does not exist. Errors are reported as a failed status code, but warnings can be present even when the status is OK.

Union field EventData. Detailed information specific to the different types of events that may be involved in processing a conversation round. The field set here defines the type of this event. EventData can be only one of the following:
userInput

object (UserConversationInput)

User input handling event.

intentMatch

object (IntentMatch)

Intent matching event.

conditionsEvaluated

object (ConditionsEvaluated)

Condition evaluation event.

onSceneEnter

object (OnSceneEnter)

OnSceneEnter execution event.

webhookRequest

object (WebhookRequest)

Webhook request dispatch event.

webhookResponse

object (WebhookResponse)

Webhook response receipt event.

webhookInitiatedTransition

object (WebhookInitiatedTransition)

Webhook-initiated transition event.

slotMatch

object (SlotMatch)

Slot matching event.

slotRequested

object (SlotRequested)

Slot requesting event.

slotValidated

object (SlotValidated)

Slot validation event.

formFilled

object (FormFilled)

Form filling event.

waitingUserInput

object (WaitingForUserInput)

Waiting-for-user-input event.

endConversation

object (EndConversation)

End-of-conversation event.
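The eventTime values above use RFC 3339 "Zulu" form with up to nine fractional digits; Python's datetime stores only microseconds, so a sketch that truncates the extra digits before parsing (the helper name is an assumption, and it handles only the trailing-'Z' form shown in the docs):

```python
from datetime import datetime, timezone
import re

def parse_event_time(ts):
    """Parse an RFC 3339 'Zulu' timestamp, truncating beyond microseconds."""
    match = re.fullmatch(
        r"(\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2})(?:\.(\d{1,9}))?Z", ts)
    if not match:
        raise ValueError(f"unexpected timestamp format: {ts!r}")
    base, frac = match.groups()
    micros = int(((frac or "") + "000000")[:6])  # pad, then keep 6 digits
    dt = datetime.strptime(base, "%Y-%m-%dT%H:%M:%S")
    return dt.replace(microsecond=micros, tzinfo=timezone.utc)
```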

ExecutionState

Current state of the execution.

JSON representation
{
  "currentSceneId": string,
  "sessionStorage": {
    object
  },
  "slots": {
    object (Slots)
  },
  "promptQueue": [
    {
      object (Prompt)
    }
  ],
  "userStorage": {
    object
  },
  "householdStorage": {
    object
  }
}
Fields
currentSceneId

string

ID of the scene which is currently active.

sessionStorage

object (Struct format)

State of the session storage: https://developers.google.com/assistant/conversational/storage-session

slots

object (Slots)

State of the slots filling, if applicable: https://developers.google.com/assistant/conversational/scenes#slot_filling

promptQueue[]

object (Prompt)

Prompt queue: https://developers.google.com/assistant/conversational/prompts

userStorage

object (Struct format)

State of the user storage: https://developers.google.com/assistant/conversational/storage-user

householdStorage

object (Struct format)

State of the home storage: https://developers.google.com/assistant/conversational/storage-home

Slots

Represents the current state of the scene's slots.

JSON representation
{
  "status": enum (SlotFillingStatus),
  "slots": {
    string: {
      object (Slot)
    },
    ...
  }
}
Fields
status

enum (SlotFillingStatus)

The current status of slot filling.

slots

map (key: string, value: object (Slot))

The slots associated with the current scene.

An object containing a list of "key": value pairs. Example: { "name": "wrench", "mass": "1.3kg", "count": "3" }.

SlotFillingStatus

Represents the current status of slot filling.

Enums
UNSPECIFIED Fallback value when the usage field is not populated.
INITIALIZED The slots have been initialized but slot filling has not started.
COLLECTING The slot values are being collected.
FINAL All slot values are final and cannot be changed.

Slot

Represents a slot.

JSON representation
{
  "mode": enum (SlotMode),
  "status": enum (SlotStatus),
  "value": value,
  "updated": boolean,
  "prompt": {
    object (Prompt)
  }
}
Fields
mode

enum (SlotMode)

The mode of the slot (required or optional). Can be set by developer.

status

enum (SlotStatus)

The status of the slot.

value

value (Value format)

The value of the slot. Changing this value in the response will modify the value in slot filling.

updated

boolean

Indicates if the slot value was collected on the last turn. This field is read-only.

prompt

object (Prompt)

Optional. This prompt is sent to the user when needed to fill a required slot. This prompt overrides the existing prompt defined in the console. This field is not included in the webhook request.

SlotMode

Represents the mode of a slot, that is, if it is required or not.

Enums
MODE_UNSPECIFIED Fallback value when the usage field is not populated.
OPTIONAL Indicates that the slot is not required to complete slot filling.
REQUIRED Indicates that the slot is required to complete slot filling.

SlotStatus

Represents the status of a slot.

Enums
SLOT_UNSPECIFIED Fallback value when the usage field is not populated.
EMPTY Indicates that the slot does not have any values. This status cannot be modified through the response.
INVALID Indicates that the slot value is invalid. This status can be set through the response.
FILLED Indicates that the slot has a value. This status cannot be modified through the response.

Status

The Status type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by gRPC. Each Status message contains three pieces of data: error code, error message, and error details.

You can find out more about this error model and how to work with it in the API Design Guide.

JSON representation
{
  "code": integer,
  "message": string,
  "details": [
    {
      "@type": string,
      field1: ...,
      ...
    }
  ]
}
Fields
code

integer

The status code, which should be an enum value of google.rpc.Code.

message

string

A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.

details[]

object

A list of messages that carry the error details. There is a common set of message types for APIs to use.

An object containing fields of an arbitrary type. An additional field "@type" contains a URI identifying the type. Example: { "id": 1234, "@type": "types.example.com/standard/id" }.

UserConversationInput

Information related to user input.

JSON representation
{
  "type": string,
  "originalQuery": string
}
Fields
type

string

Type of user input. E.g. keyboard, voice, touch, etc.

originalQuery

string

Original text input from the user.

IntentMatch

Information about a triggered intent match (global or within a scene): https://developers.google.com/assistant/conversational/intents

JSON representation
{
  "intentId": string,
  "intentParameters": {
    string: {
      object (IntentParameterValue)
    },
    ...
  },
  "handler": string,
  "nextSceneId": string
}
Fields
intentId

string

ID of the intent that triggered this interaction.

intentParameters

map (key: string, value: object (IntentParameterValue))

Parameters of the intent that triggered this interaction.

An object containing a list of "key": value pairs. Example: { "name": "wrench", "mass": "1.3kg", "count": "3" }.

handler

string

Name of the handler attached to this interaction.

nextSceneId

string

Scene to which this interaction leads.

ConditionsEvaluated

Results of conditions evaluation: https://developers.google.com/assistant/conversational/scenes#conditions

JSON representation
{
  "failedConditions": [
    {
      object (Condition)
    }
  ],
  "successCondition": {
    object (Condition)
  }
}
Fields
failedConditions[]

object (Condition)

List of conditions which were evaluated to 'false'.

successCondition

object (Condition)

The first condition which was evaluated to 'true', if any.

Condition

Evaluated condition.

JSON representation
{
  "expression": string,
  "handler": string,
  "nextSceneId": string
}
Fields
expression

string

Expression specified in this condition.

handler

string

Handler name specified in the evaluated condition.

nextSceneId

string

Destination scene specified in the evaluated condition.

OnSceneEnter

Information about execution of onSceneEnter stage: https://developers.google.com/assistant/conversational/scenes#onEnter

JSON representation
{
  "handler": string
}
Fields
handler

string

Handler name specified in onSceneEnter event.

WebhookRequest

Information about a request dispatched to the Action webhook: https://developers.google.com/assistant/conversational/webhooks#payloads

JSON representation
{
  "requestJson": string
}
Fields
requestJson

string

Payload of the webhook request.

WebhookResponse

Information about a response received from the Action webhook: https://developers.google.com/assistant/conversational/webhooks#payloads

JSON representation
{
  "responseJson": string
}
Fields
responseJson

string

Payload of the webhook response.

WebhookInitiatedTransition

Event triggered by destination scene returned from webhook: https://developers.google.com/assistant/conversational/webhooks#transition_scenes

JSON representation
{
  "nextSceneId": string
}
Fields
nextSceneId

string

ID of the scene the transition is leading to.

SlotMatch

Information about matched slot(s): https://developers.google.com/assistant/conversational/scenes#slot_filling

JSON representation
{
  "nluParameters": {
    string: {
      object (IntentParameterValue)
    },
    ...
  }
}
Fields
nluParameters

map (key: string, value: object (IntentParameterValue))

Parameters extracted by NLU from user input.

An object containing a list of "key": value pairs. Example: { "name": "wrench", "mass": "1.3kg", "count": "3" }.

SlotRequested

Information about the currently requested slot: https://developers.google.com/assistant/conversational/scenes#slot_filling

JSON representation
{
  "slot": string,
  "prompt": {
    object (Prompt)
  }
}
Fields
slot

string

Name of the requested slot.

prompt

object (Prompt)

Slot prompt.

SlotValidated

Event that happens after webhook validation is finished for slot(s): https://developers.google.com/assistant/conversational/scenes#slot_filling

FormFilled

Event that happens when a form is fully filled: https://developers.google.com/assistant/conversational/scenes#slot_filling

WaitingForUserInput

Event that happens when the system needs user input: https://developers.google.com/assistant/conversational/scenes#input

EndConversation

Event indicating that the conversation with the agent has ended.