Google Assistant (Conversational)

Integration published by Jovo | 5,815 downloads

Build Conversational Actions for Google's Assistant platform

Visual Output

Learn more about how to build Google Actions with visual output using the Jovo Framework.

Introduction to Visual Output

Visual output is used to describe or enhance the voice interaction.

Simple Response

With Simple Responses, you can provide basic text output in the form of chat bubbles to your users. They consist of a visual text bubble paired with spoken audio generated from SSML or plain text (TTS).
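As a rough sketch, a Simple Response maps onto the `firstSimple` field of the Conversational Actions prompt JSON; the speech and text values below are placeholders:

```javascript
// Minimal sketch of the prompt payload behind a Simple Response,
// following the Conversational Actions webhook JSON format.
function simpleResponsePrompt() {
  return {
    firstSimple: {
      // `speech` may contain SSML; `text` is shown as a chat bubble.
      speech: '<speak>Hello <break time="300ms"/> world!</speak>',
      text: 'Hello world!',
    },
  };
}
```

If `text` is omitted, the display text is typically derived from the spoken output.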

Basic Card

Basic Cards are used for the most basic cases of visual output. They can be used to display plain text, images and a button in addition to the speech output.

Example JavaScript | Example TypeScript
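The shape of a Basic Card can be sketched as the `content.card` field of the prompt payload; all titles, URLs, and button values below are placeholders:

```javascript
// Sketch of a Basic Card in the Conversational Actions prompt format.
function basicCardPrompt() {
  return {
    firstSimple: { speech: 'Here is a card.', text: 'Here is a card.' },
    content: {
      card: {
        title: 'Card Title',
        subtitle: 'Card Subtitle',
        text: 'Card body text, displayed below the subtitle.',
        image: {
          url: 'https://example.com/image.png',
          alt: 'Accessibility text for the image',
        },
        // Optional link-out button shown on the card.
        button: {
          name: 'Learn more',
          open: { url: 'https://example.com' },
        },
      },
    },
  };
}
```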

Image Card

Image Cards represent a simpler alternative to Basic Cards, which you can use when you just want to present an image.
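An Image Card reduces to the `content.image` field of the prompt payload; the URL and alt text below are placeholders:

```javascript
// Sketch of an Image Card: only an image, no title/text/button.
function imageCardPrompt() {
  return {
    firstSimple: { speech: 'Take a look!', text: 'Take a look!' },
    content: {
      image: {
        url: 'https://example.com/photo.png',
        alt: 'Accessibility text for the image',
      },
    },
  };
}
```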

Table Card

Table Cards are used for displaying tabular data.

Example JavaScript | Example TypeScript
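A Table Card can be sketched as the `content.table` field, with `columns` defining headers and each row supplying one cell per column; the data below is made up for illustration:

```javascript
// Sketch of a Table Card with two columns and two rows.
function tableCardPrompt() {
  return {
    content: {
      table: {
        title: 'Scores',
        columns: [{ header: 'Player' }, { header: 'Score' }],
        rows: [
          { cells: [{ text: 'Alice' }, { text: '42' }] },
          { cells: [{ text: 'Bob' }, { text: '37' }] },
        ],
      },
    },
  };
}
```

Each row's `cells` array should have the same length as `columns`.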

Selection

You can use one of the following selection types to let the user choose one out of several options as a response.

List

A List can be used to display a vertical list of selectable items. Lists must contain at least 2 and at most 30 items.

Each list provides an array of list items, each of which references the key of an entry you want to show in your list. To add list entries, you can utilize type overrides:
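As a sketch of how this fits together in the Conversational Actions webhook JSON: a type override (here under the assumed type name `prompt_option`) supplies the entries, and the list items reference them by key. All names and titles are placeholders:

```javascript
// Sketch: a List prompt plus the runtime type override that supplies its entries.
function listResponse() {
  return {
    session: {
      typeOverrides: [
        {
          name: 'prompt_option', // hypothetical type name
          mode: 'TYPE_REPLACE',
          synonym: {
            entries: [
              {
                name: 'ITEM_1',
                synonyms: ['first item'],
                display: { title: 'Item #1', description: 'First entry' },
              },
              {
                name: 'ITEM_2',
                synonyms: ['second item'],
                display: { title: 'Item #2', description: 'Second entry' },
              },
            ],
          },
        },
      ],
    },
    prompt: {
      firstSimple: { speech: 'Pick one.', text: 'Pick one.' },
      content: {
        // Each list item references an entry key from the type override above.
        list: {
          title: 'Options',
          items: [{ key: 'ITEM_1' }, { key: 'ITEM_2' }],
        },
      },
    },
  };
}
```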

Collection

Unlike Lists, Collections display a horizontal list of selectable items that allows for richer content to be displayed. Collections must also contain at least 2 items, but only a maximum of 10.

Analogous to vertical lists, each collection provides an array of items, each referencing the key of an entry you want to show. To add an entry, you can utilize type overrides:
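The payload differs from a List mainly in using `content.collection`; the entries still come from a type override as shown for Lists. A minimal sketch with placeholder keys:

```javascript
// Sketch: build a Collection prompt from an array of type-override entry keys.
function collectionPrompt(keys) {
  return {
    content: {
      collection: {
        title: 'Browse items',
        // Each item references an entry key defined via a type override.
        items: keys.map((key) => ({ key })),
      },
    },
  };
}

const prompt = collectionPrompt(['ITEM_1', 'ITEM_2', 'ITEM_3']);
```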

ON_ELEMENT_SELECTED

After the user selects one of the items in your visual selection, they will be redirected to the ON_ELEMENT_SELECTED intent, if available. There you can use this.getSelectedElementId() to get the key of the selected item:
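The handler shape can be sketched as follows. This is not a runnable Jovo app; the handler map and the stub context standing in for Jovo's `this` are simplifications for illustration:

```javascript
// Minimal sketch of an ON_ELEMENT_SELECTED handler. In a real Jovo app this
// would live inside app.setHandler({ ... }) and `this` would be the Jovo context.
const handlers = {
  ON_ELEMENT_SELECTED() {
    const key = this.getSelectedElementId(); // key of the selected list item
    return `You selected ${key}.`;
  },
};

// Stub standing in for the Jovo context, for demonstration only:
const stubContext = { getSelectedElementId: () => 'ITEM_1' };
const reply = handlers.ON_ELEMENT_SELECTED.call(stubContext);
```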

Suggestion Chips

Use suggestion chips to add possible responses as a hint as to how the user can interact with your Conversational Action next.

Official Documentation

Example JavaScript | Example TypeScript
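Suggestion chips map onto the `suggestions` array of the prompt payload; the chip titles below are placeholders and should be kept short:

```javascript
// Sketch of suggestion chips attached to a Simple Response.
function promptWithSuggestions() {
  return {
    firstSimple: { speech: 'Shall we continue?', text: 'Shall we continue?' },
    suggestions: [{ title: 'Yes' }, { title: 'No' }, { title: 'Tell me more' }],
  };
}
```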

Google Assistant Changelog

The current version might be higher than the latest change listed below because of dependency updates.

2021-07-07 [3.5.4]
  • #948 ✨ Add enableFullScreen and continueTtsDuringTouch (@aswetlow)

2021-02-22 [3.5]
  • #901 Move setResponse from response to after.response middleware (@aswetlow)
  • #901 Fix missing unit test methods in ConversationalResponse (@aswetlow)

2021-02-04 [3.4.0]
  • #892 ✨ Transactions for Google Assistant Conversational Actions (@aswetlow)

2021-01-28 [3.3.2]
  • #890 ✨ Add Conversational Actions functionality to Jovo Debugger (@aswetlow)

2020-12-03 [3.3.0]
  • #871 Add missing and broken Google Conversational Action features (@aswetlow)

2020-11-20 [3.2.4]

2020-11-16 [3.2.3]
  • Adds Interactive Canvas to Google Conversational Actions (@aswetlow)

2020-11-10 [3.2.2]
  • #856 Fixes several Google Conversational Actions issues (@aswetlow)

2020-11-05 [3.2.1]
  • Fix missing locale in push notifications object

2020-09-29 [3.1.3]
  • #831 ♻️ Enhance Google Assistant Conversational Actions (@aswetlow)

2020-09-29 [3.1.0-alpha.0]
  • #829 ✨ Work In Progress: Google Assistant Conversational Actions (@aswetlow)
