It's time to add the last bit of controls to our Alexa Skill and Google Action podcast player that the user needs. They can already pause and resume the audio stream and now we will allow them to manually switch to the next or previous episode.
As mentioned earlier, the Jovo Language Model allows us to maintain a single file, from which the Jovo CLI creates the platform-specific language models.
Currently our model looks like this:
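The exact contents depend on the previous steps of this course, but a Jovo model file (e.g. models/en-US.json) generally follows this shape (invocation name, intent names, and phrases here are placeholders, not the tutorial's actual values):

```json
{
  "invocation": "my podcast player",
  "intents": [
    {
      "name": "HelloWorldIntent",
      "phrases": [
        "hello"
      ]
    }
  ]
}
```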
Every custom intent in the Jovo Language Model has to have two properties: a name and sample phrases.
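For example, an entry for our upcoming NextIntent could look like this (the phrases are illustrative):

```json
{
  "name": "NextIntent",
  "phrases": [
    "next",
    "next episode",
    "play the next episode"
  ]
}
```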
Our own intents, which we will call NextIntent and PreviousIntent, will follow the same format, but with a small difference. If you remember, Amazon provides built-in intents for the AudioPlayer, which the user can invoke without us even including them in our language model or using our skill's invocation name. AMAZON.NextIntent and AMAZON.PreviousIntent also belong to this list, and they "override" our custom intents: if a custom intent shares the same utterances as a built-in intent, in most, if not all, cases the built-in intent will be invoked. So in our case, if the Alexa user says Alexa, next episode, the request will include the AMAZON.NextIntent rather than our own NextIntent, and we will encounter an error.
For these cases, we can specify inside our Jovo Language Model that the intent we're creating already exists as a built-in intent on Alexa. The Jovo CLI will then use the built-in intent while building the platform files, e.g. AMAZON.NextIntent will be used for the Alexa Skill and NextIntent for the Google Action:
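In the model file, this is done with a platform-specific alexa block inside the intent that overrides the intent's name for Alexa. A sketch (phrases illustrative):

```json
{
  "name": "NextIntent",
  "alexa": {
    "name": "AMAZON.NextIntent"
  },
  "phrases": [
    "next",
    "next episode"
  ]
}
```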
Some of these built-in intents are also extendable, which is why Jovo by default uses the phrases we specified to extend the built-in intent. However, that only works with some of the intents, and in our case it does not, so we specify that we don't want to extend the intent by adding an empty array of sample phrases to the intent's Alexa configuration.
Our language model should look like this now:
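Based on the approach described above, the two new intents could look roughly like this in the model file (sample phrases are placeholders, and the rest of your model stays as it was):

```json
{
  "intents": [
    {
      "name": "NextIntent",
      "alexa": {
        "name": "AMAZON.NextIntent",
        "samples": []
      },
      "phrases": [
        "next",
        "next episode"
      ]
    },
    {
      "name": "PreviousIntent",
      "alexa": {
        "name": "AMAZON.PreviousIntent",
        "samples": []
      },
      "phrases": [
        "previous",
        "previous episode"
      ]
    }
  ]
}
```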
That's all we need for now. Next, we run the following commands to create and deploy the respective platform files:
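With the Jovo CLI this is:

```sh
# Build the platform-specific files from the Jovo model
jovo build

# Deploy them to the Alexa and Dialogflow/Google projects
jovo deploy
```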
Before we add the NextIntent and PreviousIntent to our handler, we have to first make some adjustments to the configuration of our project. Technically, our Jovo Language Model contains four different intents:
If one of our Alexa users says Alexa, next or Alexa, previous one, it will invoke the AMAZON.NextIntent or AMAZON.PreviousIntent, while Google Assistant users trigger the NextIntent or PreviousIntent, which means our handler has to include all four intents:
That's redundant code. To help with that, Jovo offers a simple mapping feature for intents: we can specify which intents should be automatically mapped, or routed, to another intent using the intentMap inside the app's configuration. Docs: Jovo intentMap.
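A sketch of such a mapping (where exactly the configuration lives depends on your Jovo version, so treat the surrounding file as an assumption):

```javascript
// Map the Alexa built-in intents to our own intent names,
// so the handler only needs NextIntent and PreviousIntent.
const intentMap = {
    'AMAZON.NextIntent': 'NextIntent',
    'AMAZON.PreviousIntent': 'PreviousIntent',
};

module.exports = { intentMap };
```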
This way, we only need a single NextIntent and PreviousIntent inside our handler.
The actual logic of both intents is fairly simple: we get the current episode's index from the database, get the next/previous episode, save the new index, and send out a play directive:
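Stripped of the Jovo specifics, the index logic can be sketched in plain JavaScript. The episode list, its ordering, and the helper names below are assumptions for illustration; in the skill, the current index comes from the user database and the episode is sent out via the platform's play directive:

```javascript
// Illustrative episode list; in the skill this comes from the podcast feed.
const episodes = [
    'https://example.com/episode-1.mp3',
    'https://example.com/episode-2.mp3',
    'https://example.com/episode-3.mp3',
];

// Hypothetical helpers: return the adjacent episode,
// or undefined if there is none.
function getNextEpisode(currentIndex) {
    return episodes[currentIndex + 1];
}

function getPreviousEpisode(currentIndex) {
    return episodes[currentIndex - 1];
}

// Inside the intent handler, the flow would roughly be:
// 1. read currentIndex from the database
// 2. look up getNextEpisode(currentIndex) / getPreviousEpisode(currentIndex)
// 3. save the new index
// 4. send the play directive with the episode's URL
```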
While we are at it, we should check for a corner case that might cause an error. What if, for example, the user is currently at the last track and they want to skip ahead to the next one? We have to inform them that it's not possible.
Since accessing an array at an index without a value returns undefined, our getNextEpisode() and getPreviousEpisode() methods will also return undefined if we are at the last or first episode. So, before we send out a play directive, we simply check for that and respond accordingly:
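A self-contained sketch of that check (the episode list and the speech output are placeholders, and the function stands in for the intent handler):

```javascript
const episodes = ['ep-1.mp3', 'ep-2.mp3'];

function handleNextIntent(currentIndex) {
    const nextEpisode = episodes[currentIndex + 1];
    if (!nextEpisode) {
        // In the Jovo handler this would be something like
        // `return this.tell('...')`, which also stops the rest
        // of the handler from running.
        return 'That was the last episode.';
    }
    // Otherwise: save the new index and send the play directive.
    return nextEpisode;
}
```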
In this case, a return statement is used to stop the execution of the remaining code.
Before you can test everything out, don't forget to remove the AMAZON.PreviousIntent we added in step four.
In the next step, we will rework the user interaction at the app's launch!