Interactive Canvas

Figure 1. An interactive game built using Interactive Canvas.

Interactive Canvas is a framework built on Google Assistant that allows developers to add visual, immersive experiences to Conversational Actions. This visual experience is an interactive web app that Assistant sends as a response to the user in conversation. Unlike rich responses that exist in-line in an Assistant conversation, the Interactive Canvas web app renders as a full-screen web view.

Use Interactive Canvas if you want to do any of the following in your Action:

  • Create full-screen visuals
  • Create custom animations and transitions
  • Create data visualizations
  • Create custom layouts and GUIs

Supported devices

Interactive Canvas is currently available on the following devices:

  • Smart displays
  • Android mobile devices

How it works

An Action that uses Interactive Canvas consists of two main components:

  • Conversational Action: An Action that uses a conversational interface to fulfill user requests. You can use either Actions Builder or the Actions SDK to build your conversation.
  • Web app: A front-end web app with customized visuals that your Action sends as a response to users during a conversation. You build the web app with web technologies like HTML, JavaScript, and CSS.

Users interacting with an Interactive Canvas Action have a back-and-forth conversation with Google Assistant to accomplish their goal. For Interactive Canvas, however, the bulk of this conversation occurs within the context of your web app. To connect your Conversational Action to your web app, you must include the Interactive Canvas library in your web app code.

  • Interactive Canvas library: A JavaScript library that you include in the web app to enable communication between the web app and your Conversational Action using an API. For more information, see the Interactive Canvas API documentation.
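A minimal web app skeleton that loads the library might look like the following sketch. The `main.js` file is a hypothetical placeholder for your own code; the script URL shown is the one used to load the Interactive Canvas library:

```html
<!-- Hypothetical index.html for an Interactive Canvas web app. -->
<!DOCTYPE html>
<html>
  <head>
    <!-- Load the Interactive Canvas library so the page can
         communicate with your Conversational Action. -->
    <script src="https://www.gstatic.com/assistant/interactivecanvas/api/interactive_canvas.min.js"></script>
  </head>
  <body>
    <!-- Your custom visuals, animations, and layout go here. -->
    <script src="main.js"></script>
  </body>
</html>
```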

In addition to including the Interactive Canvas library, you must return the Canvas response type in your conversation to open your web app on the user's device. You can also use a Canvas response to update your web app based on the user's input.
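As a rough illustration, a webhook response that opens the web app and passes it data might look like the following JSON sketch. The URL and the `color` payload are placeholders for this example, and the exact response shape depends on how you build your fulfillment:

```json
{
  "prompt": {
    "firstSimple": {
      "speech": "Okay, here you go."
    },
    "canvas": {
      "url": "https://your-canvas-web-app.example.com",
      "data": [
        { "color": "blue" }
      ]
    }
  }
}
```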

To illustrate how Interactive Canvas works, imagine a hypothetical Action called Cool Colors that changes the device screen color to a color the user specifies. After the user invokes the Action, the following flow happens:

  1. The user says "Turn the screen blue" to the Assistant device.
  2. The Actions on Google platform routes the user's request to your conversational logic to match an intent.
  3. The platform matches the intent with the Action's scene, which triggers an event and calls the corresponding webhook event handler. The platform then sends a Canvas response to the device. If the web app is not already loaded, the device loads it from the URL provided in the response.
  4. When the web app loads, it registers callbacks with the Interactive Canvas API. If the Canvas response contains a data field, the object value of the data field is passed into the registered onUpdate callback of the web app. In this example, the fulfillment sends a Canvas response with a data field that includes a variable with the value of blue.
  5. Upon receiving the data value of the Canvas response, the onUpdate callback can execute custom logic for your web app and make the defined changes. In this example, the onUpdate callback reads the color from data and turns the screen blue.
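The web app side of the flow above can be sketched as follows. This is a minimal, hypothetical example for the Cool Colors scenario: the `color` field name, the fallback value, and the DOM update are assumptions, while `interactiveCanvas.ready()` and the `onUpdate` callback come from the Interactive Canvas library loaded by the web app's HTML:

```javascript
// Hypothetical Cool Colors web app logic.
// pickColor is a pure helper: it extracts the color from one Canvas
// data object. The "color" field name is an assumption for this example.
function pickColor(data) {
  return (data && data.color) || 'white'; // fall back to a default color
}

// Register callbacks with the Interactive Canvas library when it is
// present (it is loaded by the web app's HTML via a <script> tag).
if (typeof interactiveCanvas !== 'undefined') {
  interactiveCanvas.ready({
    // onUpdate receives the data sent in each Canvas response.
    onUpdate(data) {
      // The data may arrive as a list of update objects; apply the latest.
      const latest = Array.isArray(data) ? data[data.length - 1] : data;
      document.body.style.backgroundColor = pickColor(latest);
    },
  });
}
```

Keeping the color-extraction logic in a small pure function like `pickColor` makes the update handling easy to reason about separately from the DOM changes it drives.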

Next steps

To learn how to build a web app for Interactive Canvas, see Web apps.

To see the code for a complete Interactive Canvas Action, see the sample on GitHub.