We're almost done. Our podcast player can play multiple episodes in a row and lets our users pause and resume an episode as well as skip ahead or back to another episode. All of that works on both platforms with a minimal amount of code.
In this step, we will improve the user interaction at launch.
- Updating the User Interaction
- Updating the Launch Intent
- Updating the Resume Intent
- Next Step
The idea for the user interaction is the following:
New users should get the choice between the very first episode and a list of four (in this case random) episodes to pick from. Returning users will be asked whether they want to continue where they left off or listen to the latest episode instead.
What do we need for that?
First of all, we have to add a FirstEpisodeIntent. In addition to that, we need an intent that lists the four random episodes, the ListIntent, as well as one that gets triggered by the user's answer, the ChooseFromListIntent. Last but not least, we need a LatestEpisodeIntent. That's it.
Let's do this step by step:
- Adding a FirstEpisodeIntent
- Adding a LatestEpisodeIntent
- Adding a ListIntent
- Adding a ChooseFromListIntent
Probably the easiest one. We first add the intent to our language model just like we did before:
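The model entry could look something like this (the sample phrases are illustrative, not necessarily the tutorial's exact ones):

```json
{
    "name": "FirstEpisodeIntent",
    "phrases": [
        "start with the first episode",
        "play the first episode",
        "start from the beginning"
    ]
}
```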
The intent itself will use the
getFirstEpisode() method, save the current index to our database and send out a play directive:
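A minimal sketch of how this could look, assuming Jovo v2-style APIs. The Player object below is a hypothetical stand-in for the tutorial's player.js helpers, and the episode data is made up for illustration:

```javascript
// Hypothetical stand-in for the player.js helpers:
const Player = {
    getFirstEpisode() {
        return { title: 'Episode 1', url: 'https://example.com/episode-1.mp3' };
    },
};

const handler = {
    FirstEpisodeIntent() {
        const episode = Player.getFirstEpisode();
        // Save the current index to the database so we can resume later.
        this.$user.$data.currentIndex = 0;
        // Send out the play directive (Alexa shown; Google uses $mediaResponse).
        this.$alexaSkill.$audioPlayer
            .setOffsetInMilliseconds(0)
            .play(episode.url, 'token');
    },
};
```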
Works almost completely the same way as the FirstEpisodeIntent:
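The same shape as before; only the episode lookup changes. The helper names and the episode count are assumptions for illustration:

```javascript
// Hypothetical stand-ins for the player.js helpers:
const Player = {
    getLatestEpisodeIndex() { return 41; }, // made-up episode count
    getEpisode(index) {
        return { title: `Episode ${index + 1}`, url: `https://example.com/episode-${index + 1}.mp3` };
    },
};

const handler = {
    LatestEpisodeIntent() {
        const index = Player.getLatestEpisodeIndex();
        const episode = Player.getEpisode(index);
        // Save the current index so the user can resume this episode later.
        this.$user.$data.currentIndex = index;
        this.$alexaSkill.$audioPlayer
            .setOffsetInMilliseconds(0)
            .play(episode.url, 'token');
    },
};
```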
This one will be a little trickier, but before we add the intent to our handler, let's add it to the language model first:
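The model entry could look like this (sample phrases are assumptions):

```json
{
    "name": "ListIntent",
    "phrases": [
        "list episodes",
        "give me a list",
        "what episodes are there"
    ]
}
```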
Here's a small example of how the interaction should look in the end:
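A hypothetical sample dialogue (the episode titles are made up):

```
User: "Give me a list of episodes."
App:  "Here are four episodes: Episode 12, Episode 3, Episode 27, Episode 8.
       Which one would you like to listen to?"
User: "The second one."
```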
We first need to add a method to our player.js file, which returns n unique random indices:
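One possible implementation. The episode count is passed in as a parameter here to keep the sketch self-contained; the tutorial's player.js may track it internally:

```javascript
// Returns `n` unique random indices drawn from [0, amountOfEpisodes).
function getRandomIndices(n, amountOfEpisodes) {
    const indices = [];
    while (indices.length < n) {
        const index = Math.floor(Math.random() * amountOfEpisodes);
        // Only keep indices we haven't drawn yet, so all of them are unique.
        if (!indices.includes(index)) {
            indices.push(index);
        }
    }
    return indices;
}
```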
After that, we implement the intent.
The intent will first get the random indices and save them in our session attributes. These are a way to store data that we only need within a single session: every time the conversation ends, the session attributes are reset, so they are not a viable option for storing data over longer periods of time. In our case, though, they work perfectly.
We will need the indices to determine which episode the user chose, but more on that in a minute.
Besides that, we have to somehow communicate the list to our user in a simple form. The feature best suited for that is the Jovo SpeechBuilder, a tool that helps us build more complex speech responses by allowing us to assemble the response element by element.
Here's an example:
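A small sketch of the chaining style. Inside a handler, the builder is available as this.$speech; here it's passed in as a parameter so the sketch stays self-contained:

```javascript
// Chains two sentences with a short pause between them.
function buildGreeting(speech) {
    return speech
        .addText('Hello!')
        .addBreak('300ms') // pause between the two sentences
        .addText('Nice to meet you.');
}
```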
You can find the full list of features here.
In our case, it will look like this:
We add the pretext, use a for loop to get each episode by its index and add its title to our response, and append the question at the end.
The complete intent looks like this:
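A hedged sketch of how the complete intent could look; the Player helpers are stand-ins for player.js, and the fixed indices are only there to keep the example deterministic:

```javascript
// Hypothetical stand-ins for the player.js helpers:
const Player = {
    getRandomIndices(n) { return [11, 2, 26, 7].slice(0, n); }, // stub
    getEpisode(index) { return { title: `Episode ${index + 1}` }; },
};

const handler = {
    ListIntent() {
        const indices = Player.getRandomIndices(4);
        // Session attributes: only needed until the user has chosen an episode.
        this.$session.$data.episodeIndices = indices;
        this.$speech.addText('Here are four episodes.');
        for (let i = 0; i < indices.length; i++) {
            const episode = Player.getEpisode(indices[i]);
            this.$speech.addText(episode.title).addBreak('100ms');
        }
        this.$speech.addText('Which one would you like to listen to?');
        this.ask(this.$speech);
    },
};
```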
This one is a little bit trickier than the previous one. The idea is to let the user answer with first, second one, etc., because it would be a little too demanding to expect them to use the episode's title.
For that, our intent will need an input type that recognizes ordinal numbers. With the Jovo Language Model, we can specify separate input types for each platform. Since Google provides a built-in one for ordinal numbers,
@sys.ordinal, we can use that, but for Alexa, we have to create a custom one.
Every input type needs a
name, which is used to reference it later on. Besides that, it needs possible values and optional synonyms for each value:
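For instance, an input type for cities could look like this, following the Jovo model schema (the values and synonyms here are made up):

```json
{
    "name": "city",
    "values": [
        {
            "value": "new york city",
            "synonyms": ["new york", "the big apple"]
        },
        {
            "value": "los angeles",
            "synonyms": ["la"]
        }
    ]
}
```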
With every request, we will receive the input's name, its key, and its value.
For example, if our user said "I am from New York City" and it triggered the city input, our request will contain the following information:
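Roughly like this; the exact shape is an approximation of the input object Jovo hands to the handler:

```json
{
    "city": {
        "name": "city",
        "value": "New York City",
        "key": "new york city"
    }
}
```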
Let's get to our implementation. The first thing to keep in mind is that we stored our episode list inside an array, which we want to access using the index of the requested episode. That means we have to somehow convert the utterance first to the integer
1. We could probably manage to do that in our intent, but it's way easier to handle it inside our language model.
We simply define our input type's values as 1, 2, 3, 4, etc., and each value's synonyms as, for example, second or two. Now if our user says second, we will also receive the input type's key 2 in our request, which we can then use for our array.
Here's how that would look:
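A sketch of the custom input type; the name OrdinalType and the exact synonym lists are assumptions:

```json
{
    "name": "OrdinalType",
    "values": [
        { "value": "1", "synonyms": ["first", "one", "first one"] },
        { "value": "2", "synonyms": ["second", "two", "second one"] },
        { "value": "3", "synonyms": ["third", "three", "third one"] },
        { "value": "4", "synonyms": ["fourth", "four", "fourth one"] }
    ]
}
```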
We place the inputTypes array at the same level as the intents array in our language model:
Now we can add the
ChooseFromListIntent and reference our custom input type as well as the Google Action built-in one:
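A sketch of the intent entry, using the Jovo model's per-platform type mapping. The custom type name OrdinalType and the sample phrases are assumptions:

```json
{
    "name": "ChooseFromListIntent",
    "phrases": [
        "{ordinal}",
        "the {ordinal}",
        "the {ordinal} one",
        "play the {ordinal} one"
    ],
    "inputs": [
        {
            "name": "ordinal",
            "type": {
                "alexa": "OrdinalType",
                "dialogflow": "@sys.ordinal"
            }
        }
    ]
}
```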
Before we move on to add the intent to our handler, let's delete the MyNameIsIntent as well, since we don't need it anymore. The current state of our language model looks like this:
We're done with the trickier part, as this one will be fairly easy again. We first get the indices array from our session attributes, use the input to get the correct index from it, save that index to our database, use it to get the episode data, and last but not least send out a play directive.
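A hedged sketch of the handler; the input name ordinal and the Player helper are assumptions:

```javascript
// Hypothetical stand-in for the player.js helper:
const Player = {
    getEpisode(index) {
        return { title: `Episode ${index + 1}`, url: `https://example.com/episode-${index + 1}.mp3` };
    },
};

const handler = {
    ChooseFromListIntent() {
        const indices = this.$session.$data.episodeIndices;
        // The input's key holds "1".."4"; subtract one to index into the array.
        const index = indices[parseInt(this.$inputs.ordinal.key, 10) - 1];
        this.$user.$data.currentIndex = index;
        const episode = Player.getEpisode(index);
        this.$alexaSkill.$audioPlayer
            .setOffsetInMilliseconds(0)
            .play(episode.url, 'token');
    },
};
```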
Now we can use all of these intents to update the user interaction at the app's launch. We want two different outputs depending on whether it is a new user or a returning one. The easiest way to do that with Jovo is to use one of its built-in intents called
NEW_USER, which, as the name says, every new user gets routed to at launch.
So we use the NEW_USER intent to ask new users if they want to choose an episode from a list or rather start with the very first one:
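A sketch of how that question could be asked; the exact wording is an assumption:

```javascript
const handler = {
    NEW_USER() {
        this.$speech
            .addText('Welcome to our podcast player!')
            .addText('Would you like to choose an episode from a list, or start with the very first one?');
        // ask() keeps the session open so the user can answer.
        this.ask(this.$speech);
    },
};
```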
Depending on the user's choice, either the ListIntent or the FirstEpisodeIntent will be invoked. Inside the LAUNCH intent, we will handle the interaction with returning users. The current logic inside the LAUNCH intent is not needed anymore, so we delete it and replace it with a question similar to the NEW_USER intent's one:
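Again a sketch with assumed wording:

```javascript
const handler = {
    LAUNCH() {
        this.$speech
            .addText('Welcome back!')
            .addText('Would you like to continue where you left off, or listen to the latest episode?');
        // Keep the session open; the answer triggers ResumeIntent or LatestEpisodeIntent.
        this.ask(this.$speech);
    },
};
```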
In addition to the updated LAUNCH intent, we need a ResumeIntent for our Google Action as well. Since we can't specify the offset there, we will simply start playing the correct episode from the beginning.
We add the intent to our Jovo Language Model and specify that we still use Amazon's built-in intent for Alexa. We also delete the
AMAZON.ResumeIntent inside the
alexa object of our language model.
Update the intent map:
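A sketch of the relevant part of config.js; the other entries shown are assumptions about the project's existing map:

```javascript
// config.js: map Alexa's built-in resume intent to our own ResumeIntent handler.
module.exports = {
    // ...
    intentMap: {
        'AMAZON.PauseIntent': 'PauseIntent',
        'AMAZON.ResumeIntent': 'ResumeIntent',
        'AMAZON.NextIntent': 'NextIntent',
        'AMAZON.PreviousIntent': 'PreviousIntent',
    },
    // ...
};
```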
Rename the intent in our handler from AMAZON.ResumeIntent to ResumeIntent and fix its logic:
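A hedged sketch: on Alexa we resume at the stored offset, while on Google we simply restart the saved episode. The Player helper and the offset attribute name are assumptions:

```javascript
// Hypothetical stand-in for the player.js helper:
const Player = {
    getEpisode(index) {
        return { title: `Episode ${index + 1}`, url: `https://example.com/episode-${index + 1}.mp3` };
    },
};

const handler = {
    ResumeIntent() {
        const index = this.$user.$data.currentIndex;
        const episode = Player.getEpisode(index);
        if (this.isAlexaSkill()) {
            // Resume at the offset we stored when the user paused.
            this.$alexaSkill.$audioPlayer
                .setOffsetInMilliseconds(this.$user.$data.offset || 0)
                .play(episode.url, 'token');
        } else {
            // Google: no offset available, so restart the episode.
            this.$googleAction.$mediaResponse.play(episode.url, episode.title);
            this.ask('Enjoy!');
        }
    },
};
```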
We've made big changes to the Jovo Language Model. These changes have to be pushed to the developer consoles, so run
$ jovo build --deploy and feel free to test out our implementation.
In the next and final step, we will refactor the project's structure, add a few small required intents, and look at what else could be added to the project.